This notebook is a template with each step that you need to complete for the project.
Please fill in your code where there are explicit ? markers in the notebook. You are welcome to add more cells and code as you see fit.
Once you have completed all the code implementations, please export your notebook as an HTML file so the reviewers can view your code. Make sure all cell outputs are included in the export.
File-> Export Notebook As... -> Export Notebook as HTML
There is also a writeup to complete after all code implementation is done. Please answer all questions and attach the necessary tables and charts. You can complete the writeup in either markdown or PDF.
Completing the code template and writeup template will cover all of the rubric points for this project.
The rubric contains "Stand Out Suggestions" for enhancing the project beyond the minimum requirements. The stand out suggestions are optional. If you decide to pursue the "stand out suggestions", you can include the code in this notebook and also discuss the results in the writeup file.
Below are the steps to get the API username and key. Each student will have their own username and key.
Open account settings.
Scroll down to API and click Create New API Token.
Open up kaggle.json and use the username and key.
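Instead of copying the username and key by hand, the downloaded `kaggle.json` can be read programmatically. This is a hypothetical helper, not part of the template: `load_kaggle_credentials` and its behavior are assumptions, and the Kaggle CLI also accepts credentials through the `KAGGLE_USERNAME`/`KAGGLE_KEY` environment variables set here.

```python
import json
import os

def load_kaggle_credentials(path="kaggle.json"):
    # Read the token file downloaded from the Kaggle account settings page
    # and expose the credentials as environment variables that the Kaggle
    # CLI recognizes, so no secrets need to be pasted into the notebook.
    with open(path) as f:
        creds = json.load(f)
    os.environ["KAGGLE_USERNAME"] = creds["username"]
    os.environ["KAGGLE_KEY"] = creds["key"]
    return creds["username"]
```

Keeping the key in a file (with `chmod 600`) rather than in a notebook cell also avoids leaking it in the exported HTML.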
ml.t3.medium instance (2 vCPU + 4 GiB)
Python 3 (MXNet 1.8 Python 3.7 CPU Optimized)
!pip install -U pip
!pip install -U setuptools wheel
!pip install -U "mxnet<2.0.0" bokeh==2.0.1
!pip install autogluon --no-cache-dir
# Without --no-cache-dir, smaller aws instances may have trouble installing
Successfully installed pip-21.3.1
Successfully installed setuptools-60.5.0
Successfully installed bokeh-2.0.1 graphviz-0.8.4 mxnet-1.9.0
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
panel 0.12.1 requires bokeh<2.4.0,>=2.3.0, but you have bokeh 2.0.1 which is incompatible.
datascience 0.10.6 requires folium==0.2.1, but you have folium 0.8.3 which is incompatible.
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
Successfully installed ConfigSpace-0.4.19 Pillow-8.3.2 autocfg-0.0.8 autogluon-0.3.1 autogluon-contrib-nlp-0.0.1b20210201 autogluon.core-0.3.1 autogluon.extra-0.3.1 autogluon.features-0.3.1 autogluon.mxnet-0.3.1 autogluon.tabular-0.3.1 autogluon.text-0.3.1 autogluon.vision-0.3.1 bcrypt-3.2.0 boto3-1.20.35 botocore-1.23.35 catboost-0.25.1 cloudpickle-2.0.0 colorama-0.4.4 contextvars-2.4 cryptography-36.0.1 d8-0.0.2.post0 dask-2021.12.0 distributed-2021.12.0 fastai-2.5.3 fastcore-1.3.27 fastdownload-0.0.5 flake8-4.0.1 fsspec-2022.1.0 gluoncv-0.10.4.post4 immutables-0.16 importlib-metadata-4.2.0 jmespath-0.10.0 liac-arff-2.5.0 lightgbm-3.3.2 locket-0.2.1 mccabe-0.6.1 minio-7.1.2 openml-0.12.2 paramiko-2.9.2 partd-1.2.0 portalocker-2.3.2 psutil-5.8.0 pycodestyle-2.8.0 pyflakes-2.4.0 pynacl-1.5.0 s3transfer-0.5.0 sacrebleu-2.0.0 sacremoses-0.0.47 scikit-learn-0.24.2 scipy-1.6.3 sentencepiece-0.1.95 timm-clean-0.4.12 tokenizers-0.9.4 urllib3-1.25.11 xgboost-1.4.2 xmltodict-0.12.0 xxhash-2.0.2 yacs-0.1.8
# create the .kaggle directory and an empty kaggle.json file
!mkdir -p /root/.kaggle
!touch /root/.kaggle/kaggle.json
!chmod 600 /root/.kaggle/kaggle.json
# Fill in your user name and key from creating the kaggle account and API token file
import json
kaggle_username = "pedrohs777"
kaggle_key = "Key"
# Save API token the kaggle.json file
with open("/root/.kaggle/kaggle.json", "w") as f:
f.write(json.dumps({"username": kaggle_username, "key": kaggle_key}))
# Download the dataset, it will be in a .zip file so you'll need to unzip it as well.
!kaggle competitions download -c bike-sharing-demand
# If you already downloaded it you can use the -o command to overwrite the file
!unzip -o bike-sharing-demand.zip
Warning: Looks like you're using an outdated API Version, please consider updating (server 1.5.12 / client 1.5.4)
Downloading sampleSubmission.csv to /content
100% 140k/140k [00:00<00:00, 52.4MB/s]
Downloading test.csv to /content
100% 316k/316k [00:00<00:00, 44.9MB/s]
Downloading train.csv to /content
100% 633k/633k [00:00<00:00, 42.2MB/s]
unzip: cannot find or open bike-sharing-demand.zip, bike-sharing-demand.zip.zip or bike-sharing-demand.zip.ZIP.
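As the output above shows, this Kaggle client delivered the three CSVs directly rather than a single zip, so the `unzip` step fails harmlessly. A defensive sketch that handles both delivery modes (the `ensure_extracted` helper and its defaults are assumptions, not part of the template):

```python
import os
import zipfile

def ensure_extracted(zip_name="bike-sharing-demand.zip",
                     expected=("train.csv", "test.csv", "sampleSubmission.csv")):
    # Extract the archive if the Kaggle CLI produced one; otherwise assume
    # the CSVs were downloaded directly. Return the files actually present.
    if os.path.exists(zip_name):
        with zipfile.ZipFile(zip_name) as zf:
            zf.extractall()
    return [f for f in expected if os.path.exists(f)]
```

Checking the returned list before reading the CSVs avoids a confusing `FileNotFoundError` later in the notebook.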
import pandas as pd
from autogluon.tabular import TabularPredictor
# Create the train dataset in pandas by reading the csv
# Set the parsing of the datetime column so you can use some of the `dt` features in pandas later
train = pd.read_csv('/content/train.csv', parse_dates=['datetime'])
train.head()
| | datetime | season | holiday | workingday | weather | temp | atemp | humidity | windspeed | casual | registered | count |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 2011-01-01 00:00:00 | 1 | 0 | 0 | 1 | 9.84 | 14.395 | 81 | 0.0 | 3 | 13 | 16 |
| 1 | 2011-01-01 01:00:00 | 1 | 0 | 0 | 1 | 9.02 | 13.635 | 80 | 0.0 | 8 | 32 | 40 |
| 2 | 2011-01-01 02:00:00 | 1 | 0 | 0 | 1 | 9.02 | 13.635 | 80 | 0.0 | 5 | 27 | 32 |
| 3 | 2011-01-01 03:00:00 | 1 | 0 | 0 | 1 | 9.84 | 14.395 | 75 | 0.0 | 3 | 10 | 13 |
| 4 | 2011-01-01 04:00:00 | 1 | 0 | 0 | 1 | 9.84 | 14.395 | 75 | 0.0 | 0 | 1 | 1 |
# Simple output of the train dataset to view some of the min/max/variation of the dataset features.
train.describe()
# Create the test pandas dataframe in pandas by reading the csv, remember to parse the datetime!
test = pd.read_csv('/content/test.csv', parse_dates=['datetime'])
test.head()
| | datetime | season | holiday | workingday | weather | temp | atemp | humidity | windspeed |
|---|---|---|---|---|---|---|---|---|---|
| 0 | 2011-01-20 00:00:00 | 1 | 0 | 1 | 1 | 10.66 | 11.365 | 56 | 26.0027 |
| 1 | 2011-01-20 01:00:00 | 1 | 0 | 1 | 1 | 10.66 | 13.635 | 56 | 0.0000 |
| 2 | 2011-01-20 02:00:00 | 1 | 0 | 1 | 1 | 10.66 | 13.635 | 56 | 0.0000 |
| 3 | 2011-01-20 03:00:00 | 1 | 0 | 1 | 1 | 10.66 | 12.880 | 56 | 11.0014 |
| 4 | 2011-01-20 04:00:00 | 1 | 0 | 1 | 1 | 10.66 | 12.880 | 56 | 11.0014 |
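Parsing the datetime column is what unlocks the pandas `dt` accessor mentioned in the comments above. A minimal, self-contained sketch of deriving calendar features (the `hour` and `dayofweek` names are illustrative, not part of the template):

```python
import pandas as pd

# Tiny stand-in for the bike-sharing data; the real code parses dates in read_csv.
df = pd.DataFrame({"datetime": ["2011-01-01 00:00:00", "2011-01-01 01:00:00"]})
df["datetime"] = pd.to_datetime(df["datetime"])

# Once the dtype is datetime64, the .dt accessor exposes calendar features.
df["hour"] = df["datetime"].dt.hour
df["dayofweek"] = df["datetime"].dt.dayofweek  # Monday=0 ... Sunday=6
```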
# Same as the train and test datasets: read the sample submission csv
submission = pd.read_csv('/content/sampleSubmission.csv')
submission.head()
| | datetime | count |
|---|---|---|
| 0 | 2011-01-20 00:00:00 | 0 |
| 1 | 2011-01-20 01:00:00 | 0 |
| 2 | 2011-01-20 02:00:00 | 0 |
| 3 | 2011-01-20 03:00:00 | 0 |
| 4 | 2011-01-20 04:00:00 | 0 |
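The evaluation metric required below is root mean squared error. As a quick reference, a hand-rolled version (just a sketch for intuition; AutoGluon computes this metric internally):

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean squared error: square root of the mean squared residual."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

rmse([3.0, 5.0], [1.0, 5.0])  # sqrt(((3-1)**2 + 0**2) / 2) = sqrt(2)
```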
Requirements:
- We are predicting `count`, so it is the label we are setting.
- Ignore the `casual` and `registered` columns as they are also not present in the test dataset.
- Use `root_mean_squared_error` as the metric to use for evaluation.
- Use the `best_quality` preset to focus on creating the best model.

predictor = TabularPredictor(
    label="count", problem_type="regression", eval_metric="root_mean_squared_error"
).fit(
    train_data=train.drop(["casual", "registered"], axis=1),
    time_limit=600,
    presets="best_quality",
)
No path specified. Models will be saved in: "AutogluonModels/ag-20220112_200321/"
Presets specified: ['best_quality']
Beginning AutoGluon training ... Time limit = 600s
AutoGluon will save models to "AutogluonModels/ag-20220112_200321/"
AutoGluon Version: 0.3.1
Train Data Rows: 10886
Train Data Columns: 9
Preprocessing data ...
Using Feature Generators to preprocess the data ...
Fitting AutoMLPipelineFeatureGenerator...
Available Memory: 12652.29 MB
Train Data (Original) Memory Usage: 1.52 MB (0.0% of available memory)
Inferring data type of each feature based on column values. Set feature_metadata_in to manually specify special dtypes of the features.
Stage 1 Generators:
Fitting AsTypeFeatureGenerator...
Note: Converting 2 features to boolean dtype as they only contain 2 unique values.
Stage 2 Generators:
Fitting FillNaFeatureGenerator...
Stage 3 Generators:
Fitting IdentityFeatureGenerator...
Fitting DatetimeFeatureGenerator...
Stage 4 Generators:
Fitting DropUniqueFeatureGenerator...
Types of features in original data (raw dtype, special dtypes):
('float', []) : 3 | ['temp', 'atemp', 'windspeed']
('int', []) : 5 | ['season', 'holiday', 'workingday', 'weather', 'humidity']
('object', ['datetime_as_object']) : 1 | ['datetime']
Types of features in processed data (raw dtype, special dtypes):
('float', []) : 3 | ['temp', 'atemp', 'windspeed']
('int', []) : 3 | ['season', 'weather', 'humidity']
('int', ['bool']) : 2 | ['holiday', 'workingday']
('int', ['datetime_as_int']) : 1 | ['datetime']
0.2s = Fit runtime
9 features in original data used to generate 9 features in processed data.
Train Data (Processed) Memory Usage: 0.63 MB (0.0% of available memory)
Data preprocessing and feature engineering runtime = 0.21s ...
AutoGluon will gauge predictive performance using evaluation metric: 'root_mean_squared_error'
To change this, specify the eval_metric argument of fit()
AutoGluon will fit 2 stack levels (L1 to L2) ...
Fitting 11 L1 models ...
Fitting model: KNeighborsUnif_BAG_L1 ... Training model for up to 399.75s of the 599.78s of remaining time.
-160.4129 = Validation score (root_mean_squared_error)
0.03s = Training runtime
0.1s = Validation runtime
Fitting model: KNeighborsDist_BAG_L1 ... Training model for up to 399.48s of the 599.5s of remaining time.
-169.552 = Validation score (root_mean_squared_error)
0.03s = Training runtime
0.1s = Validation runtime
Fitting model: LightGBMXT_BAG_L1 ... Training model for up to 399.22s of the 599.24s of remaining time.
/usr/local/lib/python3.7/dist-packages/lightgbm/engine.py:239: UserWarning: 'verbose_eval' argument is deprecated and will be removed in a future release of LightGBM. Pass 'log_evaluation()' callback via 'callbacks' argument instead.
_log_warning("'verbose_eval' argument is deprecated and will be removed in a future release of LightGBM. "
[1000] train_set's rmse: 123.806 valid_set's rmse: 134.369
[2000] train_set's rmse: 117.412 valid_set's rmse: 133.705
[1000] train_set's rmse: 123.42 valid_set's rmse: 141.5
[1000] train_set's rmse: 125.097 valid_set's rmse: 128.797
[2000] train_set's rmse: 119.003 valid_set's rmse: 127.909
[3000] train_set's rmse: 114.63 valid_set's rmse: 127.431
[4000] train_set's rmse: 111.295 valid_set's rmse: 126.943
[5000] train_set's rmse: 108.576 valid_set's rmse: 126.844
[6000] train_set's rmse: 106.085 valid_set's rmse: 126.71
[1000] train_set's rmse: 124.131 valid_set's rmse: 138.303
[2000] train_set's rmse: 117.833 valid_set's rmse: 137.535
[1000] train_set's rmse: 124.871 valid_set's rmse: 128.052
[2000] train_set's rmse: 118.547 valid_set's rmse: 127.003
[3000] train_set's rmse: 114.124 valid_set's rmse: 126.834
[4000] train_set's rmse: 110.645 valid_set's rmse: 126.694
[1000] train_set's rmse: 124.654 valid_set's rmse: 135.095
[2000] train_set's rmse: 118.764 valid_set's rmse: 133.849
[3000] train_set's rmse: 114.615 valid_set's rmse: 133.509
[4000] train_set's rmse: 111.249 valid_set's rmse: 133.472
[5000] train_set's rmse: 108.455 valid_set's rmse: 133.281
[1000] train_set's rmse: 124.01 valid_set's rmse: 139.355
[2000] train_set's rmse: 117.819 valid_set's rmse: 138.312
[3000] train_set's rmse: 113.4 valid_set's rmse: 137.886
[4000] train_set's rmse: 110.032 valid_set's rmse: 137.758
[1000] train_set's rmse: 124.47 valid_set's rmse: 135.412
[1000] train_set's rmse: 124.18 valid_set's rmse: 137.579
[2000] train_set's rmse: 117.782 valid_set's rmse: 136.664
[3000] train_set's rmse: 113.447 valid_set's rmse: 136.246
[4000] train_set's rmse: 109.982 valid_set's rmse: 136.227
[1000] train_set's rmse: 124.641 valid_set's rmse: 132.484
-134.0883 = Validation score (root_mean_squared_error)
49.65s = Training runtime
2.28s = Validation runtime
Fitting model: LightGBM_BAG_L1 ... Training model for up to 341.59s of the 541.61s of remaining time.
[1000] train_set's rmse: 96.0217 valid_set's rmse: 123.249
[1000] train_set's rmse: 94.7479 valid_set's rmse: 135.635
[1000] train_set's rmse: 95.3055 valid_set's rmse: 132.087
[1000] train_set's rmse: 95.1635 valid_set's rmse: 131.521
-132.2864 = Validation score (root_mean_squared_error)
13.14s = Training runtime
0.52s = Validation runtime
Fitting model: RandomForestMSE_BAG_L1 ... Training model for up to 326.37s of the 526.39s of remaining time.
-118.4567 = Validation score (root_mean_squared_error)
7.78s = Training runtime
0.45s = Validation runtime
Fitting model: CatBoost_BAG_L1 ... Training model for up to 317.29s of the 517.31s of remaining time.
-132.3497 = Validation score (root_mean_squared_error)
59.46s = Training runtime
0.05s = Validation runtime
Fitting model: ExtraTreesMSE_BAG_L1 ... Training model for up to 257.67s of the 457.69s of remaining time.
-128.7334 = Validation score (root_mean_squared_error)
3.45s = Training runtime
0.45s = Validation runtime
Fitting model: NeuralNetFastAI_BAG_L1 ... Training model for up to 253.02s of the 453.04s of remaining time.
Ran out of time, stopping training early. (Stopping on epoch 0)
-139.0758 = Validation score (root_mean_squared_error)
138.86s = Training runtime
0.35s = Validation runtime
Fitting model: XGBoost_BAG_L1 ... Training model for up to 113.56s of the 313.58s of remaining time.
-132.3085 = Validation score (root_mean_squared_error)
16.22s = Training runtime
0.17s = Validation runtime
Fitting model: NeuralNetMXNet_BAG_L1 ... Training model for up to 95.91s of the 295.94s of remaining time.
Ran out of time, stopping training early. (Stopping on epoch 5)
Ran out of time, stopping training early. (Stopping on epoch 7)
Ran out of time, stopping training early. (Stopping on epoch 7)
Ran out of time, stopping training early. (Stopping on epoch 7)
Ran out of time, stopping training early. (Stopping on epoch 8)
Ran out of time, stopping training early. (Stopping on epoch 8)
Ran out of time, stopping training early. (Stopping on epoch 9)
Ran out of time, stopping training early. (Stopping on epoch 10)
Ran out of time, stopping training early. (Stopping on epoch 11)
Ran out of time, stopping training early. (Stopping on epoch 13)
-143.8615 = Validation score (root_mean_squared_error)
90.62s = Training runtime
1.89s = Validation runtime
Fitting model: LightGBMLarge_BAG_L1 ... Training model for up to 3.31s of the 203.34s of remaining time.
Ran out of time, early stopping on iteration 32. Best iteration is:
[32] train_set's rmse: 135.773 valid_set's rmse: 142.415
Ran out of time, early stopping on iteration 34. Best iteration is:
[34] train_set's rmse: 133.577 valid_set's rmse: 150.619
Ran out of time, early stopping on iteration 38. Best iteration is:
[38] train_set's rmse: 132.469 valid_set's rmse: 137.168
Ran out of time, early stopping on iteration 39. Best iteration is:
[39] train_set's rmse: 130.422 valid_set's rmse: 149.95
Ran out of time, early stopping on iteration 39. Best iteration is:
[39] train_set's rmse: 132.315 valid_set's rmse: 132.794
Ran out of time, early stopping on iteration 44. Best iteration is:
[44] train_set's rmse: 128.331 valid_set's rmse: 138.312
Ran out of time, early stopping on iteration 48. Best iteration is:
[48] train_set's rmse: 126.23 valid_set's rmse: 140.746
Ran out of time, early stopping on iteration 44. Best iteration is:
[44] train_set's rmse: 128.364 valid_set's rmse: 140.196
Ran out of time, early stopping on iteration 42. Best iteration is:
[42] train_set's rmse: 128.892 valid_set's rmse: 142.003
Ran out of time, early stopping on iteration 46. Best iteration is:
[46] train_set's rmse: 127.382 valid_set's rmse: 136.834
-141.2043 = Validation score (root_mean_squared_error)
3.05s = Training runtime
0.07s = Validation runtime
Completed 1/20 k-fold bagging repeats ...
Fitting model: WeightedEnsemble_L2 ... Training model for up to 360.0s of the 199.99s of remaining time.
-118.4411 = Validation score (root_mean_squared_error)
0.66s = Training runtime
0.0s = Validation runtime
Fitting 9 L2 models ...
Fitting model: LightGBMXT_BAG_L2 ... Training model for up to 199.3s of the 199.28s of remaining time.
-115.6916 = Validation score (root_mean_squared_error)
13.56s = Training runtime
0.37s = Validation runtime
Fitting model: LightGBM_BAG_L2 ... Training model for up to 184.62s of the 184.6s of remaining time.
-116.7877 = Validation score (root_mean_squared_error)
8.34s = Training runtime
0.09s = Validation runtime
Fitting model: RandomForestMSE_BAG_L2 ... Training model for up to 176.01s of the 175.99s of remaining time.
-118.9996 = Validation score (root_mean_squared_error)
35.12s = Training runtime
0.59s = Validation runtime
Fitting model: CatBoost_BAG_L2 ... Training model for up to 139.32s of the 139.3s of remaining time.
-116.2641 = Validation score (root_mean_squared_error)
32.69s = Training runtime
0.04s = Validation runtime
Fitting model: ExtraTreesMSE_BAG_L2 ... Training model for up to 106.51s of the 106.48s of remaining time.
-117.5309 = Validation score (root_mean_squared_error)
7.77s = Training runtime
0.57s = Validation runtime
Fitting model: NeuralNetFastAI_BAG_L2 ... Training model for up to 97.29s of the 97.27s of remaining time.
Ran out of time, stopping training early. (Stopping on epoch 16)
Ran out of time, stopping training early. (Stopping on epoch 17)
Ran out of time, stopping training early. (Stopping on epoch 17)
Ran out of time, stopping training early. (Stopping on epoch 18)
Ran out of time, stopping training early. (Stopping on epoch 19)
Ran out of time, stopping training early. (Stopping on epoch 20)
Ran out of time, stopping training early. (Stopping on epoch 20)
Ran out of time, stopping training early. (Stopping on epoch 22)
Ran out of time, stopping training early. (Stopping on epoch 25)
-115.2609 = Validation score (root_mean_squared_error)
92.89s = Training runtime
0.4s = Validation runtime
Fitting model: XGBoost_BAG_L2 ... Training model for up to 3.72s of the 3.7s of remaining time.
-118.9162 = Validation score (root_mean_squared_error)
3.52s = Training runtime
0.08s = Validation runtime
Fitting model: NeuralNetMXNet_BAG_L2 ... Training model for up to 0.0s of the -0.02s of remaining time.
Time limit exceeded... Skipping NeuralNetMXNet_BAG_L2.
Completed 1/20 k-fold bagging repeats ...
Fitting model: WeightedEnsemble_L3 ... Training model for up to 360.0s of the -0.59s of remaining time.
-114.634 = Validation score (root_mean_squared_error)
0.44s = Training runtime
0.0s = Validation runtime
AutoGluon training complete, total runtime = 601.09s ...
TabularPredictor saved. To load, use: predictor = TabularPredictor.load("AutogluonModels/ag-20220112_200321/")
predictor.fit_summary()
*** Summary of fit() ***
Estimated performance of each model:
model score_val pred_time_val fit_time pred_time_val_marginal fit_time_marginal stack_level can_infer fit_order
0 WeightedEnsemble_L3 -114.633955 7.914121 537.970252 0.000742 0.442141 3 True 20
1 NeuralNetFastAI_BAG_L2 -115.260912 6.837862 475.174306 0.401848 92.892406 2 True 18
2 LightGBMXT_BAG_L2 -115.691557 6.802000 395.838906 0.365986 13.557006 2 True 13
3 CatBoost_BAG_L2 -116.264149 6.478003 414.976249 0.041988 32.694349 2 True 16
4 LightGBM_BAG_L2 -116.787704 6.530967 390.618613 0.094953 8.336713 2 True 14
5 ExtraTreesMSE_BAG_L2 -117.530917 7.008604 390.047636 0.572589 7.765736 2 True 17
6 WeightedEnsemble_L2 -118.441149 2.340055 99.062346 0.001056 0.663428 2 True 12
7 RandomForestMSE_BAG_L1 -118.456660 0.448693 7.776116 0.448693 7.776116 1 True 5
8 XGBoost_BAG_L2 -118.916190 6.511620 385.806452 0.075605 3.524552 2 True 19
9 RandomForestMSE_BAG_L2 -118.999586 7.025372 417.402004 0.589357 35.120105 2 True 15
10 ExtraTreesMSE_BAG_L1 -128.733445 0.449261 3.449869 0.449261 3.449869 1 True 7
11 LightGBM_BAG_L1 -132.286410 0.524606 13.144010 0.524606 13.144010 1 True 4
12 XGBoost_BAG_L1 -132.308522 0.168591 16.215636 0.168591 16.215636 1 True 9
13 CatBoost_BAG_L1 -132.349692 0.051857 59.459960 0.051857 59.459960 1 True 6
14 LightGBMXT_BAG_L1 -134.088334 2.277713 49.649567 2.277713 49.649567 1 True 3
15 NeuralNetFastAI_BAG_L1 -139.075752 0.351083 138.861451 0.351083 138.861451 1 True 8
16 LightGBMLarge_BAG_L1 -141.204292 0.066056 3.049310 0.066056 3.049310 1 True 11
17 NeuralNetMXNet_BAG_L1 -143.861522 1.890306 90.622802 1.890306 90.622802 1 True 10
18 KNeighborsUnif_BAG_L1 -160.412950 0.103298 0.027282 0.103298 0.027282 1 True 1
19 KNeighborsDist_BAG_L1 -169.551983 0.104550 0.025897 0.104550 0.025897 1 True 2
Number of models trained: 20
Types of models trained:
{'StackerEnsembleModel_RF', 'WeightedEnsembleModel', 'StackerEnsembleModel_XGBoost', 'StackerEnsembleModel_KNN', 'StackerEnsembleModel_NNFastAiTabular', 'StackerEnsembleModel_TabularNeuralNet', 'StackerEnsembleModel_LGB', 'StackerEnsembleModel_CatBoost', 'StackerEnsembleModel_XT'}
Bagging used: True (with 10 folds)
Multi-layer stack-ensembling used: True (with 3 levels)
Feature Metadata (Processed):
(raw dtype, special dtypes):
('float', []) : 3 | ['temp', 'atemp', 'windspeed']
('int', []) : 3 | ['season', 'weather', 'humidity']
('int', ['bool']) : 2 | ['holiday', 'workingday']
('int', ['datetime_as_int']) : 1 | ['datetime']
Plot summary of models saved to file: AutogluonModels/ag-20220112_200321/SummaryOfModels.html
*** End of fit() summary ***
{'leaderboard': model score_val ... can_infer fit_order
0 WeightedEnsemble_L3 -114.633955 ... True 20
1 NeuralNetFastAI_BAG_L2 -115.260912 ... True 18
2 LightGBMXT_BAG_L2 -115.691557 ... True 13
3 CatBoost_BAG_L2 -116.264149 ... True 16
4 LightGBM_BAG_L2 -116.787704 ... True 14
5 ExtraTreesMSE_BAG_L2 -117.530917 ... True 17
6 WeightedEnsemble_L2 -118.441149 ... True 12
7 RandomForestMSE_BAG_L1 -118.456660 ... True 5
8 XGBoost_BAG_L2 -118.916190 ... True 19
9 RandomForestMSE_BAG_L2 -118.999586 ... True 15
10 ExtraTreesMSE_BAG_L1 -128.733445 ... True 7
11 LightGBM_BAG_L1 -132.286410 ... True 4
12 XGBoost_BAG_L1 -132.308522 ... True 9
13 CatBoost_BAG_L1 -132.349692 ... True 6
14 LightGBMXT_BAG_L1 -134.088334 ... True 3
15 NeuralNetFastAI_BAG_L1 -139.075752 ... True 8
16 LightGBMLarge_BAG_L1 -141.204292 ... True 11
17 NeuralNetMXNet_BAG_L1 -143.861522 ... True 10
18 KNeighborsUnif_BAG_L1 -160.412950 ... True 1
19 KNeighborsDist_BAG_L1 -169.551983 ... True 2
[20 rows x 9 columns],
'max_stack_level': 3,
'model_best': 'WeightedEnsemble_L3',
'model_fit_times': {'CatBoost_BAG_L1': 59.459959983825684,
'CatBoost_BAG_L2': 32.69434928894043,
'ExtraTreesMSE_BAG_L1': 3.449869155883789,
'ExtraTreesMSE_BAG_L2': 7.76573634147644,
'KNeighborsDist_BAG_L1': 0.025896787643432617,
'KNeighborsUnif_BAG_L1': 0.0272824764251709,
'LightGBMLarge_BAG_L1': 3.049309730529785,
'LightGBMXT_BAG_L1': 49.64956712722778,
'LightGBMXT_BAG_L2': 13.557006120681763,
'LightGBM_BAG_L1': 13.144009828567505,
'LightGBM_BAG_L2': 8.336713075637817,
'NeuralNetFastAI_BAG_L1': 138.86145114898682,
'NeuralNetFastAI_BAG_L2': 92.89240646362305,
'NeuralNetMXNet_BAG_L1': 90.62280178070068,
'RandomForestMSE_BAG_L1': 7.776115894317627,
'RandomForestMSE_BAG_L2': 35.12010478973389,
'WeightedEnsemble_L2': 0.6634280681610107,
'WeightedEnsemble_L3': 0.4421408176422119,
'XGBoost_BAG_L1': 16.215635776519775,
'XGBoost_BAG_L2': 3.5245518684387207},
'model_hyperparams': {'CatBoost_BAG_L1': {'max_base_models': 25,
'max_base_models_per_type': 5,
'save_bag_folds': True,
'use_orig_features': True},
'CatBoost_BAG_L2': {'max_base_models': 25,
'max_base_models_per_type': 5,
'save_bag_folds': True,
'use_orig_features': True},
'ExtraTreesMSE_BAG_L1': {'max_base_models': 25,
'max_base_models_per_type': 5,
'save_bag_folds': True,
'use_child_oof': True,
'use_orig_features': True},
'ExtraTreesMSE_BAG_L2': {'max_base_models': 25,
'max_base_models_per_type': 5,
'save_bag_folds': True,
'use_child_oof': True,
'use_orig_features': True},
'KNeighborsDist_BAG_L1': {'max_base_models': 25,
'max_base_models_per_type': 5,
'save_bag_folds': True,
'use_child_oof': True,
'use_orig_features': True},
'KNeighborsUnif_BAG_L1': {'max_base_models': 25,
'max_base_models_per_type': 5,
'save_bag_folds': True,
'use_child_oof': True,
'use_orig_features': True},
'LightGBMLarge_BAG_L1': {'max_base_models': 25,
'max_base_models_per_type': 5,
'save_bag_folds': True,
'use_orig_features': True},
'LightGBMXT_BAG_L1': {'max_base_models': 25,
'max_base_models_per_type': 5,
'save_bag_folds': True,
'use_orig_features': True},
'LightGBMXT_BAG_L2': {'max_base_models': 25,
'max_base_models_per_type': 5,
'save_bag_folds': True,
'use_orig_features': True},
'LightGBM_BAG_L1': {'max_base_models': 25,
'max_base_models_per_type': 5,
'save_bag_folds': True,
'use_orig_features': True},
'LightGBM_BAG_L2': {'max_base_models': 25,
'max_base_models_per_type': 5,
'save_bag_folds': True,
'use_orig_features': True},
'NeuralNetFastAI_BAG_L1': {'max_base_models': 25,
'max_base_models_per_type': 5,
'save_bag_folds': True,
'use_orig_features': True},
'NeuralNetFastAI_BAG_L2': {'max_base_models': 25,
'max_base_models_per_type': 5,
'save_bag_folds': True,
'use_orig_features': True},
'NeuralNetMXNet_BAG_L1': {'max_base_models': 25,
'max_base_models_per_type': 5,
'save_bag_folds': True,
'use_orig_features': True},
'RandomForestMSE_BAG_L1': {'max_base_models': 25,
'max_base_models_per_type': 5,
'save_bag_folds': True,
'use_child_oof': True,
'use_orig_features': True},
'RandomForestMSE_BAG_L2': {'max_base_models': 25,
'max_base_models_per_type': 5,
'save_bag_folds': True,
'use_child_oof': True,
'use_orig_features': True},
'WeightedEnsemble_L2': {'max_base_models': 25,
'max_base_models_per_type': 5,
'save_bag_folds': True,
'use_orig_features': False},
'WeightedEnsemble_L3': {'max_base_models': 25,
'max_base_models_per_type': 5,
'save_bag_folds': True,
'use_orig_features': False},
'XGBoost_BAG_L1': {'max_base_models': 25,
'max_base_models_per_type': 5,
'save_bag_folds': True,
'use_orig_features': True},
'XGBoost_BAG_L2': {'max_base_models': 25,
'max_base_models_per_type': 5,
'save_bag_folds': True,
'use_orig_features': True}},
'model_paths': {'CatBoost_BAG_L1': 'AutogluonModels/ag-20220112_200321/models/CatBoost_BAG_L1/',
'CatBoost_BAG_L2': 'AutogluonModels/ag-20220112_200321/models/CatBoost_BAG_L2/',
'ExtraTreesMSE_BAG_L1': 'AutogluonModels/ag-20220112_200321/models/ExtraTreesMSE_BAG_L1/',
'ExtraTreesMSE_BAG_L2': 'AutogluonModels/ag-20220112_200321/models/ExtraTreesMSE_BAG_L2/',
'KNeighborsDist_BAG_L1': 'AutogluonModels/ag-20220112_200321/models/KNeighborsDist_BAG_L1/',
'KNeighborsUnif_BAG_L1': 'AutogluonModels/ag-20220112_200321/models/KNeighborsUnif_BAG_L1/',
'LightGBMLarge_BAG_L1': 'AutogluonModels/ag-20220112_200321/models/LightGBMLarge_BAG_L1/',
'LightGBMXT_BAG_L1': 'AutogluonModels/ag-20220112_200321/models/LightGBMXT_BAG_L1/',
'LightGBMXT_BAG_L2': 'AutogluonModels/ag-20220112_200321/models/LightGBMXT_BAG_L2/',
'LightGBM_BAG_L1': 'AutogluonModels/ag-20220112_200321/models/LightGBM_BAG_L1/',
'LightGBM_BAG_L2': 'AutogluonModels/ag-20220112_200321/models/LightGBM_BAG_L2/',
'NeuralNetFastAI_BAG_L1': 'AutogluonModels/ag-20220112_200321/models/NeuralNetFastAI_BAG_L1/',
'NeuralNetFastAI_BAG_L2': 'AutogluonModels/ag-20220112_200321/models/NeuralNetFastAI_BAG_L2/',
'NeuralNetMXNet_BAG_L1': 'AutogluonModels/ag-20220112_200321/models/NeuralNetMXNet_BAG_L1/',
'RandomForestMSE_BAG_L1': 'AutogluonModels/ag-20220112_200321/models/RandomForestMSE_BAG_L1/',
'RandomForestMSE_BAG_L2': 'AutogluonModels/ag-20220112_200321/models/RandomForestMSE_BAG_L2/',
'WeightedEnsemble_L2': 'AutogluonModels/ag-20220112_200321/models/WeightedEnsemble_L2/',
'WeightedEnsemble_L3': 'AutogluonModels/ag-20220112_200321/models/WeightedEnsemble_L3/',
'XGBoost_BAG_L1': 'AutogluonModels/ag-20220112_200321/models/XGBoost_BAG_L1/',
'XGBoost_BAG_L2': 'AutogluonModels/ag-20220112_200321/models/XGBoost_BAG_L2/'},
'model_performance': {'CatBoost_BAG_L1': -132.3496915713461,
'CatBoost_BAG_L2': -116.26414910328883,
'ExtraTreesMSE_BAG_L1': -128.73344506216944,
'ExtraTreesMSE_BAG_L2': -117.53091738599062,
'KNeighborsDist_BAG_L1': -169.55198317920082,
'KNeighborsUnif_BAG_L1': -160.41294976754526,
'LightGBMLarge_BAG_L1': -141.20429182890052,
'LightGBMXT_BAG_L1': -134.08833420116997,
'LightGBMXT_BAG_L2': -115.69155732049558,
'LightGBM_BAG_L1': -132.28640961443512,
'LightGBM_BAG_L2': -116.78770350724069,
'NeuralNetFastAI_BAG_L1': -139.07575183789987,
'NeuralNetFastAI_BAG_L2': -115.26091196925515,
'NeuralNetMXNet_BAG_L1': -143.8615219450287,
'RandomForestMSE_BAG_L1': -118.45666016795751,
'RandomForestMSE_BAG_L2': -118.99958602698366,
'WeightedEnsemble_L2': -118.44114894249657,
'WeightedEnsemble_L3': -114.63395463170856,
'XGBoost_BAG_L1': -132.30852184526844,
'XGBoost_BAG_L2': -118.91618952210548},
'model_pred_times': {'CatBoost_BAG_L1': 0.05185651779174805,
'CatBoost_BAG_L2': 0.04198813438415527,
'ExtraTreesMSE_BAG_L1': 0.4492614269256592,
'ExtraTreesMSE_BAG_L2': 0.5725893974304199,
'KNeighborsDist_BAG_L1': 0.10454988479614258,
'KNeighborsUnif_BAG_L1': 0.10329771041870117,
'LightGBMLarge_BAG_L1': 0.0660555362701416,
'LightGBMXT_BAG_L1': 2.2777132987976074,
'LightGBMXT_BAG_L2': 0.3659858703613281,
'LightGBM_BAG_L1': 0.524606466293335,
'LightGBM_BAG_L2': 0.09495306015014648,
'NeuralNetFastAI_BAG_L1': 0.35108304023742676,
'NeuralNetFastAI_BAG_L2': 0.40184783935546875,
'NeuralNetMXNet_BAG_L1': 1.890305995941162,
'RandomForestMSE_BAG_L1': 0.44869303703308105,
'RandomForestMSE_BAG_L2': 0.5893571376800537,
'WeightedEnsemble_L2': 0.0010559558868408203,
'WeightedEnsemble_L3': 0.0007419586181640625,
'XGBoost_BAG_L1': 0.16859149932861328,
'XGBoost_BAG_L2': 0.07560515403747559},
'model_types': {'CatBoost_BAG_L1': 'StackerEnsembleModel_CatBoost',
'CatBoost_BAG_L2': 'StackerEnsembleModel_CatBoost',
'ExtraTreesMSE_BAG_L1': 'StackerEnsembleModel_XT',
'ExtraTreesMSE_BAG_L2': 'StackerEnsembleModel_XT',
'KNeighborsDist_BAG_L1': 'StackerEnsembleModel_KNN',
'KNeighborsUnif_BAG_L1': 'StackerEnsembleModel_KNN',
'LightGBMLarge_BAG_L1': 'StackerEnsembleModel_LGB',
'LightGBMXT_BAG_L1': 'StackerEnsembleModel_LGB',
'LightGBMXT_BAG_L2': 'StackerEnsembleModel_LGB',
'LightGBM_BAG_L1': 'StackerEnsembleModel_LGB',
'LightGBM_BAG_L2': 'StackerEnsembleModel_LGB',
'NeuralNetFastAI_BAG_L1': 'StackerEnsembleModel_NNFastAiTabular',
'NeuralNetFastAI_BAG_L2': 'StackerEnsembleModel_NNFastAiTabular',
'NeuralNetMXNet_BAG_L1': 'StackerEnsembleModel_TabularNeuralNet',
'RandomForestMSE_BAG_L1': 'StackerEnsembleModel_RF',
'RandomForestMSE_BAG_L2': 'StackerEnsembleModel_RF',
'WeightedEnsemble_L2': 'WeightedEnsembleModel',
'WeightedEnsemble_L3': 'WeightedEnsembleModel',
'XGBoost_BAG_L1': 'StackerEnsembleModel_XGBoost',
'XGBoost_BAG_L2': 'StackerEnsembleModel_XGBoost'},
'num_bag_folds': 10}
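Since fit_summary() returns this dictionary, the best model and its validation score can also be read off programmatically rather than scanned by eye. A minimal sketch, using a stub dict with the same keys as the output above (values abbreviated) in place of the real return value:

```python
# Stub standing in for `summary = predictor.fit_summary()`; the keys
# ('model_best', 'model_performance') mirror the printed output above.
summary = {
    "model_best": "WeightedEnsemble_L3",
    "model_performance": {
        "WeightedEnsemble_L3": -114.634,
        "NeuralNetFastAI_BAG_L2": -115.261,
    },
}

# Look up the winning model, then its (negated) RMSE validation score
best = summary["model_best"]
best_score = summary["model_performance"][best]
print(best, best_score)  # WeightedEnsemble_L3 -114.634
```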
predictions = predictor.predict(test)
predictions = {'datetime': test['datetime'], 'Pred_count': predictions}
predictions = pd.DataFrame(data=predictions)
predictions.head()
| | datetime | Pred_count |
|---|---|---|
| 0 | 2011-01-20 00:00:00 | 97.191711 |
| 1 | 2011-01-20 01:00:00 | 96.002457 |
| 2 | 2011-01-20 02:00:00 | 96.004128 |
| 3 | 2011-01-20 03:00:00 | 108.303314 |
| 4 | 2011-01-20 04:00:00 | 108.164223 |
# Describe the `predictions` DataFrame to see if there are any negative values
predictions.describe()
| | Pred_count |
|---|---|
| count | 6493.000000 |
| mean | 211.974594 |
| std | 128.192108 |
| min | -17.967422 |
| 25% | 108.682953 |
| 50% | 194.932404 |
| 75% | 297.041199 |
| max | 665.298035 |
# How many negative values do we have?
neg = predictions.groupby(predictions['Pred_count'])
# Aggregation helper: sum only the negative values within each group
def minus(val):
    return val[val < 0].sum()
print(neg['Pred_count'].agg([('negcount', minus)]))
              negcount
Pred_count
-17.967422  -17.967422
-16.763763  -16.763763
-11.901270  -11.901270
 2.954704     0.000000
 7.204733     0.000000
...                ...
649.348022    0.000000
653.048584    0.000000
654.244263    0.000000
657.741089    0.000000
665.298035    0.000000

[6490 rows x 1 columns]
# Set the negative predictions to zero. Assign to the column only:
# `predictions[mask] = 0` would zero out the datetime values in those rows too.
predictions.loc[predictions['Pred_count'] < 0, 'Pred_count'] = 0
predictions.describe()
| | Pred_count |
|---|---|
| count | 6493.000000 |
| mean | 211.981750 |
| std | 128.179794 |
| min | 0.000000 |
| 25% | 108.682953 |
| 50% | 194.932404 |
| 75% | 297.041199 |
| max | 665.298035 |
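The same floor-at-zero step can be done in one call with pandas' clip. A small sketch on made-up values (not the competition predictions):

```python
import pandas as pd

# Toy predictions containing a few negatives, mirroring the Pred_count column
preds = pd.DataFrame({"Pred_count": [-17.97, 96.0, -11.9, 108.3]})

# clip(lower=0) floors every negative value at zero and leaves the rest alone
preds["Pred_count"] = preds["Pred_count"].clip(lower=0)
print(preds["Pred_count"].tolist())  # [0.0, 96.0, 0.0, 108.3]
```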
predictions.head()
| | datetime | Pred_count |
|---|---|---|
| 0 | 2011-01-20 00:00:00 | 97.191711 |
| 1 | 2011-01-20 01:00:00 | 96.002457 |
| 2 | 2011-01-20 02:00:00 | 96.004128 |
| 3 | 2011-01-20 03:00:00 | 108.303314 |
| 4 | 2011-01-20 04:00:00 | 108.164223 |
submission["count"] = predictions['Pred_count']
submission.to_csv("submission.csv", index=False)
!kaggle competitions submit -c bike-sharing-demand -f submission.csv -m "first raw submission"
Warning: Looks like you're using an outdated API Version, please consider updating (server 1.5.12 / client 1.5.4)
100% 188k/188k [00:04<00:00, 45.7kB/s]
Successfully submitted to Bike Sharing Demand
My Submissions
!kaggle competitions submissions -c bike-sharing-demand | tail -n +1 | head -n 6
Warning: Looks like you're using an outdated API Version, please consider updating (server 1.5.12 / client 1.5.4)
fileName        date                 description           status    publicScore  privateScore
--------------  -------------------  --------------------  --------  -----------  ------------
submission.csv  2022-01-12 20:46:44  first raw submission  complete  1.39920      1.39920
# Plot a histogram of each feature to show its distribution. This is part of the exploratory data analysis
train.hist()
[Output: 4x3 array of matplotlib AxesSubplot objects; the rendered figure shows one histogram per numeric feature]
# Create new features derived from the datetime column
train.loc[:, "datetime"] = pd.to_datetime(train.loc[:, "datetime"])
test.loc[:, "datetime"] = pd.to_datetime(test.loc[:, "datetime"])
train['year'] = train['datetime'].dt.year
train['month'] = train['datetime'].dt.month
train['day'] = train['datetime'].dt.day
train['hour'] = train['datetime'].dt.hour
test['year'] = test['datetime'].dt.year
test['month'] = test['datetime'].dt.month
test['day'] = test['datetime'].dt.day
test['hour'] = test['datetime'].dt.hour
train["season"] = train["season"].astype("category")
train["weather"] = train["weather"].astype("category")
test["season"] = test["season"].astype("category")
test["weather"] = test["weather"].astype("category")
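As a quick sanity check of the extraction above, the same `dt` accessor pattern applied to a single hand-made timestamp (toy data, not the competition set):

```python
import pandas as pd

# One toy row; the competition data has one such timestamp per hour
df = pd.DataFrame({"datetime": ["2011-01-20 05:00:00"]})
df["datetime"] = pd.to_datetime(df["datetime"])

# Same year/month/day/hour extraction as applied to train and test above
df["year"] = df["datetime"].dt.year
df["month"] = df["datetime"].dt.month
df["day"] = df["datetime"].dt.day
df["hour"] = df["datetime"].dt.hour
print(df[["year", "month", "day", "hour"]].iloc[0].tolist())  # [2011, 1, 20, 5]
```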
# View our new features
train.head()
| | datetime | season | holiday | workingday | weather | temp | atemp | humidity | windspeed | casual | registered | count | year | month | day | hour |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 2011-01-01 00:00:00 | 1 | 0 | 0 | 1 | 9.84 | 14.395 | 81 | 0.0 | 3 | 13 | 16 | 2011 | 1 | 1 | 0 |
| 1 | 2011-01-01 01:00:00 | 1 | 0 | 0 | 1 | 9.02 | 13.635 | 80 | 0.0 | 8 | 32 | 40 | 2011 | 1 | 1 | 1 |
| 2 | 2011-01-01 02:00:00 | 1 | 0 | 0 | 1 | 9.02 | 13.635 | 80 | 0.0 | 5 | 27 | 32 | 2011 | 1 | 1 | 2 |
| 3 | 2011-01-01 03:00:00 | 1 | 0 | 0 | 1 | 9.84 | 14.395 | 75 | 0.0 | 3 | 10 | 13 | 2011 | 1 | 1 | 3 |
| 4 | 2011-01-01 04:00:00 | 1 | 0 | 0 | 1 | 9.84 | 14.395 | 75 | 0.0 | 0 | 1 | 1 | 2011 | 1 | 1 | 4 |
# View histogram of all features again now with the hour feature
train.hist()
[Output: 4x4 array of matplotlib AxesSubplot objects; the rendered figure now includes histograms for the new year, month, day, and hour features]
predictor_new_features = TabularPredictor(
label="count", problem_type="regression", eval_metric="rmse"
).fit(
train_data=train.drop(['casual', 'registered'], axis=1),
time_limit=600,
presets='best_quality')
No path specified. Models will be saved in: "AutogluonModels/ag-20220112_212155/"
Presets specified: ['best_quality']
Beginning AutoGluon training ... Time limit = 600s
AutoGluon will save models to "AutogluonModels/ag-20220112_212155/"
AutoGluon Version: 0.3.1
Train Data Rows: 10886
Train Data Columns: 13
Preprocessing data ...
Using Feature Generators to preprocess the data ...
Fitting AutoMLPipelineFeatureGenerator...
Available Memory: 11488.61 MB
Train Data (Original) Memory Usage: 0.98 MB (0.0% of available memory)
Inferring data type of each feature based on column values. Set feature_metadata_in to manually specify special dtypes of the features.
Stage 1 Generators:
Fitting AsTypeFeatureGenerator...
Note: Converting 3 features to boolean dtype as they only contain 2 unique values.
Stage 2 Generators:
Fitting FillNaFeatureGenerator...
Stage 3 Generators:
Fitting IdentityFeatureGenerator...
Fitting CategoryFeatureGenerator...
Fitting CategoryMemoryMinimizeFeatureGenerator...
Fitting DatetimeFeatureGenerator...
Stage 4 Generators:
Fitting DropUniqueFeatureGenerator...
Types of features in original data (raw dtype, special dtypes):
('category', []) : 2 | ['season', 'weather']
('datetime', []) : 1 | ['datetime']
('float', []) : 3 | ['temp', 'atemp', 'windspeed']
('int', []) : 7 | ['holiday', 'workingday', 'humidity', 'year', 'month', ...]
Types of features in processed data (raw dtype, special dtypes):
('category', []) : 2 | ['season', 'weather']
('float', []) : 3 | ['temp', 'atemp', 'windspeed']
('int', []) : 4 | ['humidity', 'month', 'day', 'hour']
('int', ['bool']) : 3 | ['holiday', 'workingday', 'year']
('int', ['datetime_as_int']) : 1 | ['datetime']
0.4s = Fit runtime
13 features in original data used to generate 13 features in processed data.
Train Data (Processed) Memory Usage: 0.75 MB (0.0% of available memory)
Data preprocessing and feature engineering runtime = 0.44s ...
AutoGluon will gauge predictive performance using evaluation metric: 'root_mean_squared_error'
To change this, specify the eval_metric argument of fit()
AutoGluon will fit 2 stack levels (L1 to L2) ...
Fitting 11 L1 models ...
Fitting model: KNeighborsUnif_BAG_L1 ... Training model for up to 399.6s of the 599.55s of remaining time.
-123.9216 = Validation score (root_mean_squared_error)
0.04s = Training runtime
0.2s = Validation runtime
Fitting model: KNeighborsDist_BAG_L1 ... Training model for up to 399.22s of the 599.17s of remaining time.
-119.3726 = Validation score (root_mean_squared_error)
0.03s = Training runtime
0.2s = Validation runtime
Fitting model: LightGBMXT_BAG_L1 ... Training model for up to 398.86s of the 598.8s of remaining time.
/usr/local/lib/python3.7/dist-packages/lightgbm/engine.py:239: UserWarning: 'verbose_eval' argument is deprecated and will be removed in a future release of LightGBM. Pass 'log_evaluation()' callback via 'callbacks' argument instead.
_log_warning("'verbose_eval' argument is deprecated and will be removed in a future release of LightGBM. "
[1000]  train_set's rmse: 30.8186  valid_set's rmse: 38.4291
[2000]  train_set's rmse: 25.4239  valid_set's rmse: 36.1393
[3000]  train_set's rmse: 22.5686  valid_set's rmse: 35.6903
[4000]  train_set's rmse: 20.5877  valid_set's rmse: 35.5711
[5000]  train_set's rmse: 19.0435  valid_set's rmse: 35.5486

[1000]  train_set's rmse: 30.3019  valid_set's rmse: 41.0917
[2000]  train_set's rmse: 25.4089  valid_set's rmse: 38.7923
[3000]  train_set's rmse: 22.682   valid_set's rmse: 38.0445
[4000]  train_set's rmse: 20.7477  valid_set's rmse: 37.6701
[5000]  train_set's rmse: 19.2301  valid_set's rmse: 37.5143
[6000]  train_set's rmse: 18.0161  valid_set's rmse: 37.3559
[7000]  train_set's rmse: 16.9715  valid_set's rmse: 37.2906
[8000]  train_set's rmse: 16.0611  valid_set's rmse: 37.2206
[9000]  train_set's rmse: 15.2683  valid_set's rmse: 37.1952
[10000] train_set's rmse: 14.5678  valid_set's rmse: 37.1924

[1000]  train_set's rmse: 30.8968  valid_set's rmse: 35.5721
[2000]  train_set's rmse: 25.8631  valid_set's rmse: 34.288
[3000]  train_set's rmse: 23.0637  valid_set's rmse: 34.1613

[1000]  train_set's rmse: 30.1963  valid_set's rmse: 41.0073
[2000]  train_set's rmse: 25.1657  valid_set's rmse: 39.5208
[3000]  train_set's rmse: 22.4714  valid_set's rmse: 38.891
[4000]  train_set's rmse: 20.6602  valid_set's rmse: 38.6141
[5000]  train_set's rmse: 19.2024  valid_set's rmse: 38.3924
[6000]  train_set's rmse: 18.0244  valid_set's rmse: 38.232
[7000]  train_set's rmse: 16.9892  valid_set's rmse: 38.0679
[8000]  train_set's rmse: 16.0734  valid_set's rmse: 37.9926
[9000]  train_set's rmse: 15.292   valid_set's rmse: 37.9346
[10000] train_set's rmse: 14.5874  valid_set's rmse: 37.8583

[1000]  train_set's rmse: 30.7161  valid_set's rmse: 38.2675
[2000]  train_set's rmse: 25.5034  valid_set's rmse: 36.7964
[3000]  train_set's rmse: 22.7183  valid_set's rmse: 36.6083

[1000]  train_set's rmse: 30.3783  valid_set's rmse: 42.1922
[2000]  train_set's rmse: 25.4027  valid_set's rmse: 40.825
[3000]  train_set's rmse: 22.6044  valid_set's rmse: 40.6565
[4000]  train_set's rmse: 20.6399  valid_set's rmse: 40.551

[1000]  train_set's rmse: 30.6785  valid_set's rmse: 38.6888
[2000]  train_set's rmse: 25.6628  valid_set's rmse: 35.913
[3000]  train_set's rmse: 22.8607  valid_set's rmse: 35.0407
[4000]  train_set's rmse: 20.8954  valid_set's rmse: 34.7444
[5000]  train_set's rmse: 19.3693  valid_set's rmse: 34.6106
[6000]  train_set's rmse: 18.0937  valid_set's rmse: 34.5086
[7000]  train_set's rmse: 17.0039  valid_set's rmse: 34.5219

[1000]  train_set's rmse: 30.7847  valid_set's rmse: 40.0936
[2000]  train_set's rmse: 25.4609  valid_set's rmse: 38.335
[3000]  train_set's rmse: 22.6799  valid_set's rmse: 37.7931
[4000]  train_set's rmse: 20.8023  valid_set's rmse: 37.6372
[5000]  train_set's rmse: 19.2946  valid_set's rmse: 37.4755
[6000]  train_set's rmse: 18.0187  valid_set's rmse: 37.3536
[7000]  train_set's rmse: 16.9566  valid_set's rmse: 37.3147
[8000]  train_set's rmse: 16.0353  valid_set's rmse: 37.2951
[9000]  train_set's rmse: 15.2295  valid_set's rmse: 37.2903
[10000] train_set's rmse: 14.5042  valid_set's rmse: 37.3442

[1000]  train_set's rmse: 30.2598  valid_set's rmse: 40.7273
[2000]  train_set's rmse: 25.2682  valid_set's rmse: 39.8782
[3000]  train_set's rmse: 22.5407  valid_set's rmse: 39.8429

[1000]  train_set's rmse: 31.0398  valid_set's rmse: 37.6854
[2000]  train_set's rmse: 25.8317  valid_set's rmse: 35.4668
[3000]  train_set's rmse: 23.0887  valid_set's rmse: 34.8136
[4000]  train_set's rmse: 21.1229  valid_set's rmse: 34.5371
[5000]  train_set's rmse: 19.5767  valid_set's rmse: 34.3685
[6000]  train_set's rmse: 18.3516  valid_set's rmse: 34.2276
[7000]  train_set's rmse: 17.2755  valid_set's rmse: 34.2726
-36.8251 = Validation score (root_mean_squared_error)
95.28s = Training runtime
5.1s = Validation runtime
Fitting model: LightGBM_BAG_L1 ... Training model for up to 286.77s of the 486.71s of remaining time.
[1000]  train_set's rmse: 21.1363  valid_set's rmse: 35.556
[2000]  train_set's rmse: 15.5229  valid_set's rmse: 35.4489

[1000]  train_set's rmse: 21.0431  valid_set's rmse: 36.9388
[2000]  train_set's rmse: 15.5393  valid_set's rmse: 36.1223
[3000]  train_set's rmse: 12.2068  valid_set's rmse: 35.9677
[4000]  train_set's rmse: 9.97753  valid_set's rmse: 35.9639

[1000]  train_set's rmse: 21.4253  valid_set's rmse: 33.0295

[1000]  train_set's rmse: 21.4591  valid_set's rmse: 38.0987
[2000]  train_set's rmse: 15.6967  valid_set's rmse: 37.1129
[3000]  train_set's rmse: 12.2044  valid_set's rmse: 36.7212
[4000]  train_set's rmse: 9.95476  valid_set's rmse: 36.6786

[1000]  train_set's rmse: 21.43    valid_set's rmse: 37.371
[2000]  train_set's rmse: 15.5136  valid_set's rmse: 36.986

[1000]  train_set's rmse: 21.6619  valid_set's rmse: 35.9541

[1000]  train_set's rmse: 21.0051  valid_set's rmse: 35.6024
[2000]  train_set's rmse: 15.4015  valid_set's rmse: 34.9824

[1000]  train_set's rmse: 21.3287  valid_set's rmse: 36.3245
[2000]  train_set's rmse: 15.6834  valid_set's rmse: 35.8282
[3000]  train_set's rmse: 12.2992  valid_set's rmse: 35.5694
[4000]  train_set's rmse: 10.1773  valid_set's rmse: 35.5318
-36.5863 = Validation score (root_mean_squared_error)
36.08s = Training runtime
1.63s = Validation runtime
Fitting model: RandomForestMSE_BAG_L1 ... Training model for up to 244.94s of the 444.88s of remaining time.
-41.3243 = Validation score (root_mean_squared_error)
10.98s = Training runtime
0.49s = Validation runtime
Fitting model: CatBoost_BAG_L1 ... Training model for up to 232.66s of the 432.6s of remaining time.
Time limit exceeded... Skipping CatBoost_BAG_L1.
Fitting model: ExtraTreesMSE_BAG_L1 ... Training model for up to 204.02s of the 403.96s of remaining time.
-41.0311 = Validation score (root_mean_squared_error)
4.46s = Training runtime
0.48s = Validation runtime
Fitting model: NeuralNetFastAI_BAG_L1 ... Training model for up to 198.23s of the 398.17s of remaining time.
-45.4168 = Validation score (root_mean_squared_error)
149.29s = Training runtime
0.42s = Validation runtime
Fitting model: XGBoost_BAG_L1 ... Training model for up to 48.24s of the 248.18s of remaining time.
-37.4255 = Validation score (root_mean_squared_error)
34.79s = Training runtime
0.38s = Validation runtime
Fitting model: NeuralNetMXNet_BAG_L1 ... Training model for up to 10.8s of the 210.75s of remaining time.
Time limit exceeded... Skipping NeuralNetMXNet_BAG_L1.
Fitting model: LightGBMLarge_BAG_L1 ... Training model for up to 10.13s of the 210.08s of remaining time.
Ran out of time, early stopping on iteration 163. Best iteration is:
[163] train_set's rmse: 24.9066 valid_set's rmse: 37.8032
Time limit exceeded... Skipping LightGBMLarge_BAG_L1.
Completed 1/20 k-fold bagging repeats ...
Fitting model: WeightedEnsemble_L2 ... Training model for up to 360.0s of the 208.94s of remaining time.
-35.1595 = Validation score (root_mean_squared_error)
0.48s = Training runtime
0.0s = Validation runtime
Fitting 9 L2 models ...
Fitting model: LightGBMXT_BAG_L2 ... Training model for up to 208.43s of the 208.41s of remaining time.
-36.0397 = Validation score (root_mean_squared_error)
10.3s = Training runtime
0.27s = Validation runtime
Fitting model: LightGBM_BAG_L2 ... Training model for up to 197.5s of the 197.48s of remaining time.
-35.5464 = Validation score (root_mean_squared_error)
8.45s = Training runtime
0.14s = Validation runtime
Fitting model: RandomForestMSE_BAG_L2 ... Training model for up to 188.73s of the 188.71s of remaining time.
-36.2541 = Validation score (root_mean_squared_error)
29.52s = Training runtime
0.58s = Validation runtime
Fitting model: CatBoost_BAG_L2 ... Training model for up to 157.85s of the 157.83s of remaining time.
-35.3413 = Validation score (root_mean_squared_error)
35.26s = Training runtime
0.08s = Validation runtime
Fitting model: ExtraTreesMSE_BAG_L2 ... Training model for up to 122.44s of the 122.43s of remaining time.
-35.5614 = Validation score (root_mean_squared_error)
7.99s = Training runtime
0.57s = Validation runtime
Fitting model: NeuralNetFastAI_BAG_L2 ... Training model for up to 113.12s of the 113.1s of remaining time.
Ran out of time, stopping training early. (Stopping on epoch 17)
Ran out of time, stopping training early. (Stopping on epoch 17)
Ran out of time, stopping training early. (Stopping on epoch 17)
Ran out of time, stopping training early. (Stopping on epoch 18)
Ran out of time, stopping training early. (Stopping on epoch 19)
Ran out of time, stopping training early. (Stopping on epoch 20)
Ran out of time, stopping training early. (Stopping on epoch 21)
Ran out of time, stopping training early. (Stopping on epoch 22)
Ran out of time, stopping training early. (Stopping on epoch 24)
-35.8248 = Validation score (root_mean_squared_error)
107.74s = Training runtime
0.46s = Validation runtime
Fitting model: XGBoost_BAG_L2 ... Training model for up to 4.65s of the 4.63s of remaining time.
-38.1653 = Validation score (root_mean_squared_error)
4.37s = Training runtime
0.12s = Validation runtime
Fitting model: NeuralNetMXNet_BAG_L2 ... Training model for up to 0.01s of the -0.01s of remaining time.
Time limit exceeded... Skipping NeuralNetMXNet_BAG_L2.
Completed 1/20 k-fold bagging repeats ...
Fitting model: WeightedEnsemble_L3 ... Training model for up to 360.0s of the -0.7s of remaining time.
-35.0295 = Validation score (root_mean_squared_error)
0.43s = Training runtime
0.0s = Validation runtime
AutoGluon training complete, total runtime = 601.17s ...
TabularPredictor saved. To load, use: predictor = TabularPredictor.load("AutogluonModels/ag-20220112_212155/")
predictor_new_features.fit_summary()
*** Summary of fit() ***
Estimated performance of each model:
model score_val pred_time_val fit_time pred_time_val_marginal fit_time_marginal stack_level can_infer fit_order
0 WeightedEnsemble_L3 -35.029454 10.155910 490.824291 0.000745 0.431925 3 True 17
1 WeightedEnsemble_L2 -35.159474 8.021527 326.907185 0.000749 0.483585 2 True 9
2 CatBoost_BAG_L2 -35.341252 8.986034 366.211876 0.077499 35.262077 2 True 13
3 LightGBM_BAG_L2 -35.546448 9.051810 339.401589 0.143275 8.451790 2 True 11
4 ExtraTreesMSE_BAG_L2 -35.561368 9.477804 338.935531 0.569269 7.985732 2 True 14
5 NeuralNetFastAI_BAG_L2 -35.824764 9.365122 438.692769 0.456587 107.742969 2 True 15
6 LightGBMXT_BAG_L2 -36.039664 9.182462 341.246400 0.273928 10.296601 2 True 10
7 RandomForestMSE_BAG_L2 -36.254113 9.489236 360.466144 0.580701 29.516345 2 True 12
8 LightGBM_BAG_L1 -36.586278 1.629826 36.078389 1.629826 36.078389 1 True 4
9 LightGBMXT_BAG_L1 -36.825093 5.097003 95.284107 5.097003 95.284107 1 True 3
10 XGBoost_BAG_L1 -37.425499 0.381701 34.790274 0.381701 34.790274 1 True 8
11 XGBoost_BAG_L2 -38.165297 9.030482 335.318738 0.121948 4.368939 2 True 16
12 ExtraTreesMSE_BAG_L1 -41.031078 0.479998 4.456622 0.479998 4.456622 1 True 6
13 RandomForestMSE_BAG_L1 -41.324329 0.493389 10.979311 0.493389 10.979311 1 True 5
14 NeuralNetFastAI_BAG_L1 -45.416818 0.418859 149.291520 0.418859 149.291520 1 True 7
15 KNeighborsDist_BAG_L1 -119.372602 0.204595 0.028768 0.204595 0.028768 1 True 2
16 KNeighborsUnif_BAG_L1 -123.921631 0.203163 0.040809 0.203163 0.040809 1 True 1
Number of models trained: 17
Types of models trained:
{'StackerEnsembleModel_RF', 'WeightedEnsembleModel', 'StackerEnsembleModel_KNN', 'StackerEnsembleModel_NNFastAiTabular', 'StackerEnsembleModel_LGB', 'StackerEnsembleModel_XGBoost', 'StackerEnsembleModel_XT', 'StackerEnsembleModel_CatBoost'}
Bagging used: True (with 10 folds)
Multi-layer stack-ensembling used: True (with 3 levels)
Feature Metadata (Processed):
(raw dtype, special dtypes):
('category', []) : 2 | ['season', 'weather']
('float', []) : 3 | ['temp', 'atemp', 'windspeed']
('int', []) : 4 | ['humidity', 'month', 'day', 'hour']
('int', ['bool']) : 3 | ['holiday', 'workingday', 'year']
('int', ['datetime_as_int']) : 1 | ['datetime']
Plot summary of models saved to file: AutogluonModels/ag-20220112_212155/SummaryOfModels.html
*** End of fit() summary ***
{'leaderboard': model score_val ... can_infer fit_order
0 WeightedEnsemble_L3 -35.029454 ... True 17
1 WeightedEnsemble_L2 -35.159474 ... True 9
2 CatBoost_BAG_L2 -35.341252 ... True 13
3 LightGBM_BAG_L2 -35.546448 ... True 11
4 ExtraTreesMSE_BAG_L2 -35.561368 ... True 14
5 NeuralNetFastAI_BAG_L2 -35.824764 ... True 15
6 LightGBMXT_BAG_L2 -36.039664 ... True 10
7 RandomForestMSE_BAG_L2 -36.254113 ... True 12
8 LightGBM_BAG_L1 -36.586278 ... True 4
9 LightGBMXT_BAG_L1 -36.825093 ... True 3
10 XGBoost_BAG_L1 -37.425499 ... True 8
11 XGBoost_BAG_L2 -38.165297 ... True 16
12 ExtraTreesMSE_BAG_L1 -41.031078 ... True 6
13 RandomForestMSE_BAG_L1 -41.324329 ... True 5
14 NeuralNetFastAI_BAG_L1 -45.416818 ... True 7
15 KNeighborsDist_BAG_L1 -119.372602 ... True 2
16 KNeighborsUnif_BAG_L1 -123.921631 ... True 1
[17 rows x 9 columns],
'max_stack_level': 3,
'model_best': 'WeightedEnsemble_L3',
'model_fit_times': {'CatBoost_BAG_L2': 35.26207685470581,
'ExtraTreesMSE_BAG_L1': 4.456622123718262,
'ExtraTreesMSE_BAG_L2': 7.985731601715088,
'KNeighborsDist_BAG_L1': 0.028767824172973633,
'KNeighborsUnif_BAG_L1': 0.04080915451049805,
'LightGBMXT_BAG_L1': 95.2841067314148,
'LightGBMXT_BAG_L2': 10.296600818634033,
'LightGBM_BAG_L1': 36.078389406204224,
'LightGBM_BAG_L2': 8.451789617538452,
'NeuralNetFastAI_BAG_L1': 149.29151964187622,
'NeuralNetFastAI_BAG_L2': 107.74296927452087,
'RandomForestMSE_BAG_L1': 10.979310989379883,
'RandomForestMSE_BAG_L2': 29.51634454727173,
'WeightedEnsemble_L2': 0.4835848808288574,
'WeightedEnsemble_L3': 0.43192458152770996,
'XGBoost_BAG_L1': 34.790273666381836,
'XGBoost_BAG_L2': 4.368938684463501},
'model_hyperparams': {'CatBoost_BAG_L2': {'max_base_models': 25,
'max_base_models_per_type': 5,
'save_bag_folds': True,
'use_orig_features': True},
'ExtraTreesMSE_BAG_L1': {'max_base_models': 25,
'max_base_models_per_type': 5,
'save_bag_folds': True,
'use_child_oof': True,
'use_orig_features': True},
'ExtraTreesMSE_BAG_L2': {'max_base_models': 25,
'max_base_models_per_type': 5,
'save_bag_folds': True,
'use_child_oof': True,
'use_orig_features': True},
'KNeighborsDist_BAG_L1': {'max_base_models': 25,
'max_base_models_per_type': 5,
'save_bag_folds': True,
'use_child_oof': True,
'use_orig_features': True},
'KNeighborsUnif_BAG_L1': {'max_base_models': 25,
'max_base_models_per_type': 5,
'save_bag_folds': True,
'use_child_oof': True,
'use_orig_features': True},
'LightGBMXT_BAG_L1': {'max_base_models': 25,
'max_base_models_per_type': 5,
'save_bag_folds': True,
'use_orig_features': True},
'LightGBMXT_BAG_L2': {'max_base_models': 25,
'max_base_models_per_type': 5,
'save_bag_folds': True,
'use_orig_features': True},
'LightGBM_BAG_L1': {'max_base_models': 25,
'max_base_models_per_type': 5,
'save_bag_folds': True,
'use_orig_features': True},
'LightGBM_BAG_L2': {'max_base_models': 25,
'max_base_models_per_type': 5,
'save_bag_folds': True,
'use_orig_features': True},
'NeuralNetFastAI_BAG_L1': {'max_base_models': 25,
'max_base_models_per_type': 5,
'save_bag_folds': True,
'use_orig_features': True},
'NeuralNetFastAI_BAG_L2': {'max_base_models': 25,
'max_base_models_per_type': 5,
'save_bag_folds': True,
'use_orig_features': True},
'RandomForestMSE_BAG_L1': {'max_base_models': 25,
'max_base_models_per_type': 5,
'save_bag_folds': True,
'use_child_oof': True,
'use_orig_features': True},
'RandomForestMSE_BAG_L2': {'max_base_models': 25,
'max_base_models_per_type': 5,
'save_bag_folds': True,
'use_child_oof': True,
'use_orig_features': True},
'WeightedEnsemble_L2': {'max_base_models': 25,
'max_base_models_per_type': 5,
'save_bag_folds': True,
'use_orig_features': False},
'WeightedEnsemble_L3': {'max_base_models': 25,
'max_base_models_per_type': 5,
'save_bag_folds': True,
'use_orig_features': False},
'XGBoost_BAG_L1': {'max_base_models': 25,
'max_base_models_per_type': 5,
'save_bag_folds': True,
'use_orig_features': True},
'XGBoost_BAG_L2': {'max_base_models': 25,
'max_base_models_per_type': 5,
'save_bag_folds': True,
'use_orig_features': True}},
'model_paths': {'CatBoost_BAG_L2': 'AutogluonModels/ag-20220112_212155/models/CatBoost_BAG_L2/',
'ExtraTreesMSE_BAG_L1': 'AutogluonModels/ag-20220112_212155/models/ExtraTreesMSE_BAG_L1/',
'ExtraTreesMSE_BAG_L2': 'AutogluonModels/ag-20220112_212155/models/ExtraTreesMSE_BAG_L2/',
'KNeighborsDist_BAG_L1': 'AutogluonModels/ag-20220112_212155/models/KNeighborsDist_BAG_L1/',
'KNeighborsUnif_BAG_L1': 'AutogluonModels/ag-20220112_212155/models/KNeighborsUnif_BAG_L1/',
'LightGBMXT_BAG_L1': 'AutogluonModels/ag-20220112_212155/models/LightGBMXT_BAG_L1/',
'LightGBMXT_BAG_L2': 'AutogluonModels/ag-20220112_212155/models/LightGBMXT_BAG_L2/',
'LightGBM_BAG_L1': 'AutogluonModels/ag-20220112_212155/models/LightGBM_BAG_L1/',
'LightGBM_BAG_L2': 'AutogluonModels/ag-20220112_212155/models/LightGBM_BAG_L2/',
'NeuralNetFastAI_BAG_L1': 'AutogluonModels/ag-20220112_212155/models/NeuralNetFastAI_BAG_L1/',
'NeuralNetFastAI_BAG_L2': 'AutogluonModels/ag-20220112_212155/models/NeuralNetFastAI_BAG_L2/',
'RandomForestMSE_BAG_L1': 'AutogluonModels/ag-20220112_212155/models/RandomForestMSE_BAG_L1/',
'RandomForestMSE_BAG_L2': 'AutogluonModels/ag-20220112_212155/models/RandomForestMSE_BAG_L2/',
'WeightedEnsemble_L2': 'AutogluonModels/ag-20220112_212155/models/WeightedEnsemble_L2/',
'WeightedEnsemble_L3': 'AutogluonModels/ag-20220112_212155/models/WeightedEnsemble_L3/',
'XGBoost_BAG_L1': 'AutogluonModels/ag-20220112_212155/models/XGBoost_BAG_L1/',
'XGBoost_BAG_L2': 'AutogluonModels/ag-20220112_212155/models/XGBoost_BAG_L2/'},
'model_performance': {'CatBoost_BAG_L2': -35.34125150329546,
'ExtraTreesMSE_BAG_L1': -41.03107843469637,
'ExtraTreesMSE_BAG_L2': -35.56136786018123,
'KNeighborsDist_BAG_L1': -119.37260178212154,
'KNeighborsUnif_BAG_L1': -123.92163053871438,
'LightGBMXT_BAG_L1': -36.82509279744021,
'LightGBMXT_BAG_L2': -36.03966385509497,
'LightGBM_BAG_L1': -36.5862778241865,
'LightGBM_BAG_L2': -35.546447899817124,
'NeuralNetFastAI_BAG_L1': -45.41681814672156,
'NeuralNetFastAI_BAG_L2': -35.824764099848416,
'RandomForestMSE_BAG_L1': -41.324328695915135,
'RandomForestMSE_BAG_L2': -36.254113011620106,
'WeightedEnsemble_L2': -35.159474353230934,
'WeightedEnsemble_L3': -35.029453623776654,
'XGBoost_BAG_L1': -37.425499141792265,
'XGBoost_BAG_L2': -38.165297416173},
'model_pred_times': {'CatBoost_BAG_L2': 0.0774993896484375,
'ExtraTreesMSE_BAG_L1': 0.4799981117248535,
'ExtraTreesMSE_BAG_L2': 0.5692694187164307,
'KNeighborsDist_BAG_L1': 0.20459532737731934,
'KNeighborsUnif_BAG_L1': 0.20316314697265625,
'LightGBMXT_BAG_L1': 5.09700345993042,
'LightGBMXT_BAG_L2': 0.2739276885986328,
'LightGBM_BAG_L1': 1.629826307296753,
'LightGBM_BAG_L2': 0.14327478408813477,
'NeuralNetFastAI_BAG_L1': 0.41885924339294434,
'NeuralNetFastAI_BAG_L2': 0.4565868377685547,
'RandomForestMSE_BAG_L1': 0.49338865280151367,
'RandomForestMSE_BAG_L2': 0.5807008743286133,
'WeightedEnsemble_L2': 0.0007486343383789062,
'WeightedEnsemble_L3': 0.0007445812225341797,
'XGBoost_BAG_L1': 0.3817005157470703,
'XGBoost_BAG_L2': 0.1219475269317627},
'model_types': {'CatBoost_BAG_L2': 'StackerEnsembleModel_CatBoost',
'ExtraTreesMSE_BAG_L1': 'StackerEnsembleModel_XT',
'ExtraTreesMSE_BAG_L2': 'StackerEnsembleModel_XT',
'KNeighborsDist_BAG_L1': 'StackerEnsembleModel_KNN',
'KNeighborsUnif_BAG_L1': 'StackerEnsembleModel_KNN',
'LightGBMXT_BAG_L1': 'StackerEnsembleModel_LGB',
'LightGBMXT_BAG_L2': 'StackerEnsembleModel_LGB',
'LightGBM_BAG_L1': 'StackerEnsembleModel_LGB',
'LightGBM_BAG_L2': 'StackerEnsembleModel_LGB',
'NeuralNetFastAI_BAG_L1': 'StackerEnsembleModel_NNFastAiTabular',
'NeuralNetFastAI_BAG_L2': 'StackerEnsembleModel_NNFastAiTabular',
'RandomForestMSE_BAG_L1': 'StackerEnsembleModel_RF',
'RandomForestMSE_BAG_L2': 'StackerEnsembleModel_RF',
'WeightedEnsemble_L2': 'WeightedEnsembleModel',
'WeightedEnsemble_L3': 'WeightedEnsembleModel',
'XGBoost_BAG_L1': 'StackerEnsembleModel_XGBoost',
'XGBoost_BAG_L2': 'StackerEnsembleModel_XGBoost'},
'num_bag_folds': 10}
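The `fit_summary()` dictionary above can also be queried programmatically. A minimal sketch of picking the best model by validation score (the dict below is a hand-made stand-in shaped like `summary['model_performance']`, not the real summary object; AutoGluon negates error metrics so that higher is always better):

```python
# Stand-in for summary['model_performance'] from fit_summary().
model_performance = {
    'WeightedEnsemble_L3': -35.03,
    'WeightedEnsemble_L2': -35.16,
    'KNeighborsUnif_BAG_L1': -123.92,
}

# Scores are negated RMSE, so the best model is simply the max by score.
best_model = max(model_performance, key=model_performance.get)
best_rmse = -model_performance[best_model]  # undo the negation to recover RMSE

print(best_model, best_rmse)  # WeightedEnsemble_L3 35.03
```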
predictions_new_features = predictor_new_features.predict(test)
predictions_new_features = {'datetime': test['datetime'], 'Pred_count': predictions_new_features}
predictions_new_features = pd.DataFrame(data=predictions_new_features)
predictions_new_features.head()
|   | datetime | Pred_count |
|---|---|---|
| 0 | 2011-01-20 00:00:00 | 12.691998 |
| 1 | 2011-01-20 01:00:00 | 4.707187 |
| 2 | 2011-01-20 02:00:00 | 3.468102 |
| 3 | 2011-01-20 03:00:00 | 3.434032 |
| 4 | 2011-01-20 04:00:00 | 3.928625 |
# Remember to set all negative values to zero
# (assign via .loc so only the Pred_count column is modified, not the whole row)
predictions_new_features.loc[predictions_new_features['Pred_count'] < 0, 'Pred_count'] = 0
predictions_new_features.describe()
|   | Pred_count |
|---|---|
| count | 6493.000000 |
| mean | 190.081589 |
| std | 172.848434 |
| min | 1.221206 |
| 25% | 46.991890 |
| 50% | 147.572662 |
| 75% | 283.675110 |
| max | 870.337036 |
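As an aside, pandas' `clip` gives a one-liner for the negative-value cleanup above. This is a sketch of the same step on made-up values, not what the notebook ran:

```python
import pandas as pd

preds = pd.DataFrame({
    'datetime': ['2011-01-20 00:00:00', '2011-01-20 01:00:00'],
    'Pred_count': [12.69, -4.7],  # made-up values, one negative
})

# clip(lower=0) replaces negative predictions with 0 and leaves the
# datetime column untouched, avoiding the whole-row overwrite that
# boolean-mask assignment on the full frame would cause.
preds['Pred_count'] = preds['Pred_count'].clip(lower=0)
print(preds['Pred_count'].tolist())  # [12.69, 0.0]
```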
# Submit predictions, same as before
submission_new_features = pd.read_csv('/content/submission.csv')
submission_new_features["count"] = predictions_new_features['Pred_count']
submission_new_features.to_csv("submission_new_features.csv", index=False)
!kaggle competitions submit -c bike-sharing-demand -f submission_new_features.csv -m "new features"
Warning: Looks like you're using an outdated API Version, please consider updating (server 1.5.12 / client 1.5.4)
100% 188k/188k [00:03<00:00, 56.3kB/s]
Successfully submitted to Bike Sharing Demand
!kaggle competitions submissions -c bike-sharing-demand | tail -n +1 | head -n 6
Warning: Looks like you're using an outdated API Version, please consider updating (server 1.5.12 / client 1.5.4)
fileName                     date                 description           status    publicScore  privateScore
---------------------------  -------------------  --------------------  --------  -----------  ------------
submission_new_features.csv  2022-01-12 21:40:38  new features          complete  0.47165      0.47165
submission.csv               2022-01-12 20:46:44  first raw submission  complete  1.39920      1.39920
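The kaggle CLI can also emit CSV (the `--csv` flag on `kaggle competitions submissions`), which is easier to parse than the fixed-width table above. A sketch that parses output shaped like that listing; the string below is hand-written sample data, not a live API call:

```python
import io
import pandas as pd

# Hand-written sample shaped like `kaggle competitions submissions --csv` output.
sample_csv = """fileName,date,description,status,publicScore,privateScore
submission_new_features.csv,2022-01-12 21:40:38,new features,complete,0.47165,0.47165
submission.csv,2022-01-12 20:46:44,first raw submission,complete,1.39920,1.39920
"""

subs = pd.read_csv(io.StringIO(sample_csv))
# Lower is better on this competition (the leaderboard metric is RMSLE).
best = subs.loc[subs['publicScore'].idxmin()]
print(best['fileName'], best['publicScore'])
```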
0.47165
Hyperparameter optimization: tune models via the fit() hyperparameters and hyperparameter_tune_kwargs arguments.
import autogluon.core as ag
## From autogluon documentation
nn_options = {
'dropout_prob': ag.space.Real(0.0, 0.5, default=0.1), # dropout probability
}
gbm_options = {
'num_boost_round': 100, # number of boosting rounds
'num_leaves': ag.space.Int(lower=26, upper=66, default=36), # number of leaves in trees
}
hyperparameters = { # hyperparameters of each model type
'GBM': gbm_options,
'NN': nn_options,
}
num_trials = 3 # try at most 3 different hyperparameter configurations for each type of model
search_strategy = 'auto' # tune hyperparameters using Bayesian optimization routine with a local scheduler
hyperparameter_tune_kwargs = {
'num_trials': num_trials,
'scheduler' : 'local',
'searcher': search_strategy,
}
predictor_new_hpo = TabularPredictor(
label="count", problem_type="regression", eval_metric="rmse"
).fit(
train_data=train.drop(['casual', 'registered'], axis=1),
time_limit=600,
presets='best_quality', hyperparameters=hyperparameters, hyperparameter_tune_kwargs=hyperparameter_tune_kwargs)
No path specified. Models will be saved in: "AutogluonModels/ag-20220112_231546/"
Presets specified: ['best_quality']
Warning: hyperparameter tuning is currently experimental and may cause the process to hang.
Beginning AutoGluon training ... Time limit = 600s
AutoGluon will save models to "AutogluonModels/ag-20220112_231546/"
AutoGluon Version: 0.3.1
Train Data Rows: 10886
Train Data Columns: 13
Preprocessing data ...
Using Feature Generators to preprocess the data ...
Fitting AutoMLPipelineFeatureGenerator...
Available Memory: 11393.05 MB
Train Data (Original) Memory Usage: 0.98 MB (0.0% of available memory)
Inferring data type of each feature based on column values. Set feature_metadata_in to manually specify special dtypes of the features.
Stage 1 Generators:
Fitting AsTypeFeatureGenerator...
Note: Converting 3 features to boolean dtype as they only contain 2 unique values.
Stage 2 Generators:
Fitting FillNaFeatureGenerator...
Stage 3 Generators:
Fitting IdentityFeatureGenerator...
Fitting CategoryFeatureGenerator...
Fitting CategoryMemoryMinimizeFeatureGenerator...
Fitting DatetimeFeatureGenerator...
Stage 4 Generators:
Fitting DropUniqueFeatureGenerator...
Types of features in original data (raw dtype, special dtypes):
('category', []) : 2 | ['season', 'weather']
('datetime', []) : 1 | ['datetime']
('float', []) : 3 | ['temp', 'atemp', 'windspeed']
('int', []) : 7 | ['holiday', 'workingday', 'humidity', 'year', 'month', ...]
Types of features in processed data (raw dtype, special dtypes):
('category', []) : 2 | ['season', 'weather']
('float', []) : 3 | ['temp', 'atemp', 'windspeed']
('int', []) : 4 | ['humidity', 'month', 'day', 'hour']
('int', ['bool']) : 3 | ['holiday', 'workingday', 'year']
('int', ['datetime_as_int']) : 1 | ['datetime']
0.2s = Fit runtime
13 features in original data used to generate 13 features in processed data.
Train Data (Processed) Memory Usage: 0.75 MB (0.0% of available memory)
Data preprocessing and feature engineering runtime = 0.26s ...
AutoGluon will gauge predictive performance using evaluation metric: 'root_mean_squared_error'
To change this, specify the eval_metric argument of fit()
AutoGluon will fit 2 stack levels (L1 to L2) ...
Fitting 2 L1 models ...
Hyperparameter tuning model: LightGBM_BAG_L1 ...
Fitted model: LightGBM_BAG_L1/T0 ...
-41.2625 = Validation score (root_mean_squared_error)
0.4s = Training runtime
0.01s = Validation runtime
Fitted model: LightGBM_BAG_L1/T1 ...
-119.1295 = Validation score (root_mean_squared_error)
0.41s = Training runtime
0.01s = Validation runtime
Fitted model: LightGBM_BAG_L1/T2 ...
-36.8378 = Validation score (root_mean_squared_error)
0.35s = Training runtime
0.01s = Validation runtime
Hyperparameter tuning model: NeuralNetMXNet_BAG_L1 ...
Ran out of time, stopping training early. (Stopping on epoch 8)
Time limit exceeded
Fitted model: NeuralNetMXNet_BAG_L1/T0 ...
-144.5967 = Validation score (root_mean_squared_error)
12.0s = Training runtime
0.07s = Validation runtime
Fitting model: LightGBM_BAG_L1/T0 ... Training model for up to 384.12s of the 584.13s of remaining time.
Attempting to fit model without HPO, but search space is provided. fit() will only consider default hyperparameter values from search space.
-42.7903 = Validation score (root_mean_squared_error)
4.25s = Training runtime
0.15s = Validation runtime
Fitting model: LightGBM_BAG_L1/T1 ... Training model for up to 379.97s of the 579.98s of remaining time.
-55.5013 = Validation score (root_mean_squared_error)
4.19s = Training runtime
0.14s = Validation runtime
Fitting model: LightGBM_BAG_L1/T2 ... Training model for up to 375.91s of the 575.92s of remaining time.
-42.3844 = Validation score (root_mean_squared_error)
4.15s = Training runtime
0.14s = Validation runtime
Fitting model: NeuralNetMXNet_BAG_L1/T0 ... Training model for up to 371.83s of the 571.84s of remaining time.
Ran out of time, stopping training early. (Stopping on epoch 26)
Ran out of time, stopping training early. (Stopping on epoch 27)
Ran out of time, stopping training early. (Stopping on epoch 28)
Ran out of time, stopping training early. (Stopping on epoch 29)
Ran out of time, stopping training early. (Stopping on epoch 34)
Ran out of time, stopping training early. (Stopping on epoch 32)
Ran out of time, stopping training early. (Stopping on epoch 35)
Ran out of time, stopping training early. (Stopping on epoch 39)
Ran out of time, stopping training early. (Stopping on epoch 46)
-83.6451 = Validation score (root_mean_squared_error)
367.3s = Training runtime
2.47s = Validation runtime
Completed 1/20 k-fold bagging repeats ...
Fitting model: WeightedEnsemble_L2 ... Training model for up to 360.0s of the 214.04s of remaining time.
-42.3844 = Validation score (root_mean_squared_error)
0.28s = Training runtime
0.0s = Validation runtime
Fitting 2 L2 models ...
Hyperparameter tuning model: LightGBM_BAG_L2 ...
Fitted model: LightGBM_BAG_L2/T0 ...
-41.6996 = Validation score (root_mean_squared_error)
0.47s = Training runtime
0.01s = Validation runtime
Fitted model: LightGBM_BAG_L2/T1 ...
-64.8428 = Validation score (root_mean_squared_error)
0.44s = Training runtime
0.01s = Validation runtime
Fitted model: LightGBM_BAG_L2/T2 ...
-63.2817 = Validation score (root_mean_squared_error)
0.51s = Training runtime
0.01s = Validation runtime
Hyperparameter tuning model: NeuralNetMXNet_BAG_L2 ...
Ran out of time, stopping training early. (Stopping on epoch 3)
Time limit exceeded
Fitted model: NeuralNetMXNet_BAG_L2/T0 ...
-109.0721 = Validation score (root_mean_squared_error)
5.39s = Training runtime
0.08s = Validation runtime
Fitting model: LightGBM_BAG_L2/T0 ... Training model for up to 204.53s of the 204.51s of remaining time.
-41.3076 = Validation score (root_mean_squared_error)
4.75s = Training runtime
0.13s = Validation runtime
Fitting model: LightGBM_BAG_L2/T1 ... Training model for up to 199.96s of the 199.95s of remaining time.
-44.1926 = Validation score (root_mean_squared_error)
4.67s = Training runtime
0.14s = Validation runtime
Fitting model: LightGBM_BAG_L2/T2 ... Training model for up to 195.47s of the 195.45s of remaining time.
-43.9656 = Validation score (root_mean_squared_error)
4.78s = Training runtime
0.13s = Validation runtime
Fitting model: NeuralNetMXNet_BAG_L2/T0 ... Training model for up to 190.91s of the 190.9s of remaining time.
Ran out of time, stopping training early. (Stopping on epoch 12)
Ran out of time, stopping training early. (Stopping on epoch 13)
Ran out of time, stopping training early. (Stopping on epoch 13)
Ran out of time, stopping training early. (Stopping on epoch 13)
Ran out of time, stopping training early. (Stopping on epoch 14)
Ran out of time, stopping training early. (Stopping on epoch 15)
Ran out of time, stopping training early. (Stopping on epoch 16)
Ran out of time, stopping training early. (Stopping on epoch 21)
Ran out of time, stopping training early. (Stopping on epoch 22)
-56.4045 = Validation score (root_mean_squared_error)
186.33s = Training runtime
2.54s = Validation runtime
Completed 1/20 k-fold bagging repeats ...
Fitting model: WeightedEnsemble_L3 ... Training model for up to 360.0s of the 7.41s of remaining time.
-41.2823 = Validation score (root_mean_squared_error)
0.28s = Training runtime
0.0s = Validation runtime
AutoGluon training complete, total runtime = 592.91s ...
TabularPredictor saved. To load, use: predictor = TabularPredictor.load("AutogluonModels/ag-20220112_231546/")
predictor_new_hpo.fit_summary()
*** Summary of fit() ***
Estimated performance of each model:
model score_val pred_time_val fit_time pred_time_val_marginal fit_time_marginal stack_level can_infer fit_order
0 WeightedEnsemble_L3 -41.282342 5.709038 576.037574 0.000790 0.277052 3 True 10
1 LightGBM_BAG_L2/T0 -41.307625 3.033459 384.648088 0.130620 4.754860 2 True 6
2 LightGBM_BAG_L1/T2 -42.384438 0.139868 4.150900 0.139868 4.150900 1 True 3
3 WeightedEnsemble_L2 -42.384438 0.140630 4.430936 0.000763 0.280037 2 True 5
4 LightGBM_BAG_L1/T0 -42.790304 0.152393 4.254586 0.152393 4.254586 1 True 1
5 LightGBM_BAG_L2/T2 -43.965599 3.033939 384.676854 0.131099 4.783626 2 True 8
6 LightGBM_BAG_L2/T1 -44.192568 3.039248 384.563783 0.136408 4.670555 2 True 7
7 LightGBM_BAG_L1/T1 -55.501303 0.142997 4.189963 0.142997 4.189963 1 True 2
8 NeuralNetMXNet_BAG_L2/T0 -56.404482 5.446529 566.222036 2.543689 186.328808 2 True 9
9 NeuralNetMXNet_BAG_L1/T0 -83.645148 2.467582 367.297780 2.467582 367.297780 1 True 4
Number of models trained: 10
Types of models trained:
{'WeightedEnsembleModel', 'StackerEnsembleModel_LGB', 'StackerEnsembleModel_TabularNeuralNet'}
Bagging used: True (with 10 folds)
Multi-layer stack-ensembling used: True (with 3 levels)
Feature Metadata (Processed):
(raw dtype, special dtypes):
('category', []) : 2 | ['season', 'weather']
('float', []) : 3 | ['temp', 'atemp', 'windspeed']
('int', []) : 4 | ['humidity', 'month', 'day', 'hour']
('int', ['bool']) : 3 | ['holiday', 'workingday', 'year']
('int', ['datetime_as_int']) : 1 | ['datetime']
Plot summary of models saved to file: AutogluonModels/ag-20220112_231546/SummaryOfModels.html
*** End of fit() summary ***
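Note that the validation scores above (e.g. `-41.2823` for `WeightedEnsemble_L3`) are negated RMSE values: AutoGluon's scorers follow a higher-is-better convention, so a smaller error appears as a larger (less negative) score. A minimal stdlib sketch of the underlying metric, with made-up numbers for illustration:

```python
import math

def rmse(y_true, y_pred):
    """Root mean squared error over paired sequences."""
    assert len(y_true) == len(y_pred) and y_true
    sq = [(t - p) ** 2 for t, p in zip(y_true, y_pred)]
    return math.sqrt(sum(sq) / len(sq))

# AutoGluon logs the negated value so "higher is better" holds for ranking.
y_true = [100, 150, 200]
y_pred = [110, 140, 190]
print(-rmse(y_true, y_pred))  # -10.0
```

This is why the leaderboard sorts models with scores closest to zero at the top.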
{'leaderboard': model score_val ... can_infer fit_order
0 WeightedEnsemble_L3 -41.282342 ... True 10
1 LightGBM_BAG_L2/T0 -41.307625 ... True 6
2 LightGBM_BAG_L1/T2 -42.384438 ... True 3
3 WeightedEnsemble_L2 -42.384438 ... True 5
4 LightGBM_BAG_L1/T0 -42.790304 ... True 1
5 LightGBM_BAG_L2/T2 -43.965599 ... True 8
6 LightGBM_BAG_L2/T1 -44.192568 ... True 7
7 LightGBM_BAG_L1/T1 -55.501303 ... True 2
8 NeuralNetMXNet_BAG_L2/T0 -56.404482 ... True 9
9 NeuralNetMXNet_BAG_L1/T0 -83.645148 ... True 4
[10 rows x 9 columns],
'max_stack_level': 3,
'model_best': 'WeightedEnsemble_L3',
'model_fit_times': {'LightGBM_BAG_L1/T0': 4.25458550453186,
'LightGBM_BAG_L1/T1': 4.189963340759277,
'LightGBM_BAG_L1/T2': 4.150899648666382,
'LightGBM_BAG_L2/T0': 4.754860162734985,
'LightGBM_BAG_L2/T1': 4.670555114746094,
'LightGBM_BAG_L2/T2': 4.783625602722168,
'NeuralNetMXNet_BAG_L1/T0': 367.2977795600891,
'NeuralNetMXNet_BAG_L2/T0': 186.32880783081055,
'WeightedEnsemble_L2': 0.28003668785095215,
'WeightedEnsemble_L3': 0.2770521640777588},
'model_hyperparams': {'LightGBM_BAG_L1/T0': {'max_base_models': 25,
'max_base_models_per_type': 5,
'save_bag_folds': True,
'use_orig_features': True},
'LightGBM_BAG_L1/T1': {'max_base_models': 25,
'max_base_models_per_type': 5,
'save_bag_folds': True,
'use_orig_features': True},
'LightGBM_BAG_L1/T2': {'max_base_models': 25,
'max_base_models_per_type': 5,
'save_bag_folds': True,
'use_orig_features': True},
'LightGBM_BAG_L2/T0': {'max_base_models': 25,
'max_base_models_per_type': 5,
'save_bag_folds': True,
'use_orig_features': True},
'LightGBM_BAG_L2/T1': {'max_base_models': 25,
'max_base_models_per_type': 5,
'save_bag_folds': True,
'use_orig_features': True},
'LightGBM_BAG_L2/T2': {'max_base_models': 25,
'max_base_models_per_type': 5,
'save_bag_folds': True,
'use_orig_features': True},
'NeuralNetMXNet_BAG_L1/T0': {'max_base_models': 25,
'max_base_models_per_type': 5,
'save_bag_folds': True,
'use_orig_features': True},
'NeuralNetMXNet_BAG_L2/T0': {'max_base_models': 25,
'max_base_models_per_type': 5,
'save_bag_folds': True,
'use_orig_features': True},
'WeightedEnsemble_L2': {'max_base_models': 25,
'max_base_models_per_type': 5,
'save_bag_folds': True,
'use_orig_features': False},
'WeightedEnsemble_L3': {'max_base_models': 25,
'max_base_models_per_type': 5,
'save_bag_folds': True,
'use_orig_features': False}},
'model_paths': {'LightGBM_BAG_L1/T0': 'AutogluonModels/ag-20220112_231546/models/LightGBM_BAG_L1/T0/',
'LightGBM_BAG_L1/T1': 'AutogluonModels/ag-20220112_231546/models/LightGBM_BAG_L1/T1/',
'LightGBM_BAG_L1/T2': 'AutogluonModels/ag-20220112_231546/models/LightGBM_BAG_L1/T2/',
'LightGBM_BAG_L2/T0': 'AutogluonModels/ag-20220112_231546/models/LightGBM_BAG_L2/T0/',
'LightGBM_BAG_L2/T1': 'AutogluonModels/ag-20220112_231546/models/LightGBM_BAG_L2/T1/',
'LightGBM_BAG_L2/T2': 'AutogluonModels/ag-20220112_231546/models/LightGBM_BAG_L2/T2/',
'NeuralNetMXNet_BAG_L1/T0': 'AutogluonModels/ag-20220112_231546/models/NeuralNetMXNet_BAG_L1/T0/',
'NeuralNetMXNet_BAG_L2/T0': 'AutogluonModels/ag-20220112_231546/models/NeuralNetMXNet_BAG_L2/T0/',
'WeightedEnsemble_L2': 'AutogluonModels/ag-20220112_231546/models/WeightedEnsemble_L2/',
'WeightedEnsemble_L3': 'AutogluonModels/ag-20220112_231546/models/WeightedEnsemble_L3/'},
'model_performance': {'LightGBM_BAG_L1/T0': -42.79030424903357,
'LightGBM_BAG_L1/T1': -55.50130319639482,
'LightGBM_BAG_L1/T2': -42.38443789446749,
'LightGBM_BAG_L2/T0': -41.307625246122534,
'LightGBM_BAG_L2/T1': -44.192567523109574,
'LightGBM_BAG_L2/T2': -43.96559898617842,
'NeuralNetMXNet_BAG_L1/T0': -83.64514828199613,
'NeuralNetMXNet_BAG_L2/T0': -56.40448174439107,
'WeightedEnsemble_L2': -42.38443789446749,
'WeightedEnsemble_L3': -41.282341600971606},
'model_pred_times': {'LightGBM_BAG_L1/T0': 0.15239310264587402,
'LightGBM_BAG_L1/T1': 0.14299726486206055,
'LightGBM_BAG_L1/T2': 0.13986754417419434,
'LightGBM_BAG_L2/T0': 0.13061976432800293,
'LightGBM_BAG_L2/T1': 0.13640809059143066,
'LightGBM_BAG_L2/T2': 0.13109922409057617,
'NeuralNetMXNet_BAG_L1/T0': 2.4675817489624023,
'NeuralNetMXNet_BAG_L2/T0': 2.543689250946045,
'WeightedEnsemble_L2': 0.000762939453125,
'WeightedEnsemble_L3': 0.0007898807525634766},
'model_types': {'LightGBM_BAG_L1/T0': 'StackerEnsembleModel_LGB',
'LightGBM_BAG_L1/T1': 'StackerEnsembleModel_LGB',
'LightGBM_BAG_L1/T2': 'StackerEnsembleModel_LGB',
'LightGBM_BAG_L2/T0': 'StackerEnsembleModel_LGB',
'LightGBM_BAG_L2/T1': 'StackerEnsembleModel_LGB',
'LightGBM_BAG_L2/T2': 'StackerEnsembleModel_LGB',
'NeuralNetMXNet_BAG_L1/T0': 'StackerEnsembleModel_TabularNeuralNet',
'NeuralNetMXNet_BAG_L2/T0': 'StackerEnsembleModel_TabularNeuralNet',
'WeightedEnsemble_L2': 'WeightedEnsembleModel',
'WeightedEnsemble_L3': 'WeightedEnsembleModel'},
'num_bag_folds': 10}
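Since `fit_summary()` returns a plain dict (as printed above), the best model and its validation score can be read programmatically rather than eyeballed from the log. A minimal sketch, using the keys shown above; the `summary` dict here is a hand-copied stand-in for the actual return value of `predictor_new_hpo.fit_summary()`:

```python
# Illustrative stand-in for the dict returned by predictor_new_hpo.fit_summary()
summary = {
    "model_best": "WeightedEnsemble_L3",
    "model_performance": {
        "WeightedEnsemble_L3": -41.282341600971606,
        "LightGBM_BAG_L2/T0": -41.307625246122534,
    },
}

best = summary["model_best"]
# AutoGluon reports higher-is-better scores, so RMSE appears negated
best_rmse = -summary["model_performance"][best]
print(f"best model: {best}, validation RMSE: {best_rmse:.2f}")
```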
prediction_new_hpo = predictor_new_hpo.predict(test)
prediction_new_hpo = {'datetime': test['datetime'], 'Pred_count': prediction_new_hpo}
prediction_new_hpo = pd.DataFrame(data=prediction_new_hpo)
prediction_new_hpo.head()
| | datetime | Pred_count |
|---|---|---|
| 0 | 2011-01-20 00:00:00 | 12.144185 |
| 1 | 2011-01-20 01:00:00 | 8.368087 |
| 2 | 2011-01-20 02:00:00 | 8.332276 |
| 3 | 2011-01-20 03:00:00 | 8.282018 |
| 4 | 2011-01-20 04:00:00 | 8.296806 |
# Remember to set all negative predictions to zero.
# Clip only the Pred_count column (not the whole row), so the datetime values are preserved
prediction_new_hpo.loc[prediction_new_hpo['Pred_count'] < 0, 'Pred_count'] = 0
# Submit predictions the same way as before
submission_new_hpo = pd.read_csv('/content/submission.csv')
submission_new_hpo["count"] = prediction_new_hpo['Pred_count']
submission_new_hpo.to_csv("submission_new_hpo.csv", index=False)
!kaggle competitions submit -c bike-sharing-demand -f submission_new_hpo.csv -m "new features with hyperparameters"
Warning: Looks like you're using an outdated API Version, please consider updating (server 1.5.12 / client 1.5.4)
100% 188k/188k [00:03<00:00, 51.0kB/s]
Successfully submitted to Bike Sharing Demand
!kaggle competitions submissions -c bike-sharing-demand | tail -n +1 | head -n 6
Warning: Looks like you're using an outdated API Version, please consider updating (server 1.5.12 / client 1.5.4)
fileName                     date                 description                        status    publicScore  privateScore
---------------------------  -------------------  ---------------------------------  --------  -----------  ------------
submission_new_hpo.csv       2022-01-12 23:27:06  new features with hyperparameters  complete  0.50893      0.50893
submission_new_features.csv  2022-01-12 21:40:38  new features                       complete  0.47165      0.47165
submission.csv               2022-01-12 20:46:44  first raw submission               complete  1.39920      1.39920
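The public scores above (0.50893, 0.47165, 1.39920) are root mean squared logarithmic error (RMSLE), the metric Kaggle uses for the Bike Sharing Demand competition. A minimal numpy sketch of the metric, with toy values for illustration only:

```python
import numpy as np

def rmsle(y_true, y_pred):
    """Root mean squared logarithmic error: RMSE computed on log(1 + x)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.sqrt(np.mean((np.log1p(y_pred) - np.log1p(y_true)) ** 2))

# Toy values for illustration
print(rmsle([10, 20, 30], [12, 18, 33]))
```

Because the error is taken on log counts, RMSLE penalizes relative error rather than absolute error, which suits count targets that span several orders of magnitude.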
# Taking the top model score from each training run and creating a line plot to show improvement
# You can create these in the notebook and save them to PNG or use some other tool (e.g. google sheets, excel)
fig = pd.DataFrame(
{
"model": ["initial", "add_features", "hpo"],
"score": [114.633955, 35.029454, 41.282342]
}
).plot(x="model", y="score", figsize=(8, 6)).get_figure()
fig.savefig('model_train_score.png')
# Take the 3 Kaggle scores and create a line plot to show improvement
fig = pd.DataFrame(
{
"test_eval": ["initial", "add_features", "hpo"],
"score": [1.39920, 0.47165, 0.50893]
}
).plot(x="test_eval", y="score", figsize=(8, 6)).get_figure()
fig.savefig('model_test_score.png')
# The 3 hyperparameters we tuned, with the Kaggle score as the result
hyperparams_df = pd.DataFrame({
"model": ["initial_model", "add_features_model", "hpo_model"],
"hpo1": ['default_vals', 'default_vals', 'GBM: num_leaves: lower=26, upper=66'],
"hpo2": ['default_vals', 'default_vals', 'NN: dropout_prob: 0.0, 0.5'],
"hpo3": ['default_vals', 'default_vals', 'GBM: num_boost_round: 100'],
"score": [1.39920, 0.47165, 0.50893]
})
hyperparams_df.head()
| | model | hpo1 | hpo2 | hpo3 | score |
|---|---|---|---|---|---|
| 0 | initial_model | default_vals | default_vals | default_vals | 1.39920 |
| 1 | add_features_model | default_vals | default_vals | default_vals | 0.47165 |
| 2 | hpo_model | GBM: num_leaves: lower=26, upper=66 | NN: dropout_prob: 0.0, 0.5 | GBM: num_boost_round: 100 | 0.50893 |
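The search ranges in the table can be pictured as candidate configurations drawn at random. The sketch below is an illustration using only the standard library, not the AutoGluon search-space API; the `sample_config` helper and the dotted key names are my own:

```python
import random

random.seed(0)

# Illustrative stand-in for the HPO search: draw candidate configurations
# from the same ranges listed in the table above (not the AutoGluon API)
def sample_config():
    return {
        "GBM.num_leaves": random.randint(26, 66),     # lower=26, upper=66
        "NN.dropout_prob": random.uniform(0.0, 0.5),  # range 0.0-0.5
        "GBM.num_boost_round": 100,                   # held fixed
    }

for _ in range(3):
    print(sample_config())
```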
import matplotlib.pyplot as plt

def plot_series(time, series, fmt="-", start=0, end=None, label=None):
    """Plot a slice of a time series with labeled axes and a grid."""
    plt.plot(time[start:end], series[start:end], fmt, label=label)
    plt.xlabel("Time")
    plt.ylabel("Value")
    if label:
        plt.legend(fontsize=14)
    plt.grid(True)
sub_new = pd.read_csv('/content/submission_new_features.csv')
Plot time series of train and test data
import matplotlib.pyplot as plt
series = train["count"].to_numpy()
time = train["datetime"].to_numpy()
plt.figure(figsize=(350, 15))
plot_series(time, series)
plt.title("Train Data time series graph")
#plot_series(time1, series1)
plt.show()
sub_new.loc[:, "datetime"] = pd.to_datetime(sub_new.loc[:, "datetime"])
series1 = sub_new["count"].to_numpy()
time1 = sub_new["datetime"].to_numpy()
plt.figure(figsize=(350, 15))
#plot_series(time, series)
plot_series(time1, series1)
plt.title("Test Data time series graph")
plt.show()
Prediction with XGBoost
import xgboost as xgb
train_df = pd.read_csv('/content/train.csv')
test_df = pd.read_csv('/content/test.csv')
train_df.loc[:, "datetime"] = pd.to_datetime(train_df.loc[:, "datetime"])
test_df.loc[:, "datetime"] = pd.to_datetime(test_df.loc[:, "datetime"])
train_df['year'] = train_df['datetime'].dt.year
train_df['month'] = train_df['datetime'].dt.month
train_df['day'] = train_df['datetime'].dt.day
train_df['hour'] = train_df['datetime'].dt.hour
test_df['year'] = test_df['datetime'].dt.year
test_df['month'] = test_df['datetime'].dt.month
test_df['day'] = test_df['datetime'].dt.day
test_df['hour'] = test_df['datetime'].dt.hour
trainxgb = train_df.drop(['casual', 'registered','count', 'datetime'], axis=1)
trainxgb.head()
| | season | holiday | workingday | weather | temp | atemp | humidity | windspeed | year | month | day | hour |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | 1 | 0 | 0 | 1 | 9.84 | 14.395 | 81 | 0.0 | 2011 | 1 | 1 | 0 |
| 1 | 1 | 0 | 0 | 1 | 9.02 | 13.635 | 80 | 0.0 | 2011 | 1 | 1 | 1 |
| 2 | 1 | 0 | 0 | 1 | 9.02 | 13.635 | 80 | 0.0 | 2011 | 1 | 1 | 2 |
| 3 | 1 | 0 | 0 | 1 | 9.84 | 14.395 | 75 | 0.0 | 2011 | 1 | 1 | 3 |
| 4 | 1 | 0 | 0 | 1 | 9.84 | 14.395 | 75 | 0.0 | 2011 | 1 | 1 | 4 |
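The repeated `.dt` accessor calls above (done once for `train_df`, once for `test_df`) could be factored into a single helper applied to both frames. A sketch; the `add_datetime_features` name is my own:

```python
import pandas as pd

def add_datetime_features(df):
    """Split the datetime column into year/month/day/hour columns."""
    df = df.copy()
    df["datetime"] = pd.to_datetime(df["datetime"])
    for part in ("year", "month", "day", "hour"):
        df[part] = getattr(df["datetime"].dt, part)
    return df

# Toy example
demo = pd.DataFrame({"datetime": ["2011-01-20 05:00:00"]})
print(add_datetime_features(demo)[["year", "month", "day", "hour"]])
```

Working on a copy keeps the original frame unchanged, so the helper can be reused safely on both train and test data.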
countxgb = train_df['count']
countxgb.head()
0    16
1    40
2    32
3    13
4     1
Name: count, dtype: int64
train_xgb = xgb.DMatrix(
trainxgb, countxgb
)
params = {"objective": "reg:linear"}  # deprecated alias; newer XGBoost versions use "reg:squarederror"
bst = xgb.train(params, train_xgb)
bst.predict(train_xgb)
[21:17:55] WARNING: ../src/objective/regression_obj.cu:171: reg:linear is now deprecated in favor of reg:squarederror.
array([ 35.479843, 31.946043, 25.44875 , ..., 185.26509 , 141.68748 ,
111.29078 ], dtype=float32)
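To put a number on the in-sample fit, the predictions above could be compared to the training targets with RMSE (the same metric AutoGluon validated on). A minimal numpy sketch; the toy values stand in for `countxgb` and `bst.predict(train_xgb)` from the actual run:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean squared error between two arrays."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.sqrt(np.mean((y_pred - y_true) ** 2))

# In the notebook this would be rmse(countxgb, bst.predict(train_xgb));
# toy values shown here instead:
print(rmse([16, 40, 32], [35.48, 31.95, 25.45]))
```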
!jupyter nbconvert --to html bike_sharing.ipynb
[NbConvertApp] Converting notebook bike_sharing.ipynb to html
C:\Users\pjher\anaconda3\lib\site-packages\nbconvert\filters\datatypefilter.py:39: UserWarning: Your element with mimetype(s) dict_keys(['application/vnd.colab-display-data+json']) is not able to be represented.
warn("Your element with mimetype(s) {mimetypes}"
[NbConvertApp] Writing 6035949 bytes to bike_sharing.html